[Python-Dev] [Python-checkins] Daily reference leaks (42917d774476): sum=9

2014-07-04 Thread Brett Cannon
Looks like there is an actual leak found by test_io. Any ideas on what may
have introduced it?

On Fri Jul 04 2014 at 5:01:02 AM,  wrote:

> results for 42917d774476 on branch "default"
> 
>
> test_functools leaked [0, 0, 3] memory blocks, sum=3
> test_io leaked [2, 2, 2] references, sum=6
>
>
> Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R',
> '3:3:/home/antoine/cpython/refleaks/reflogODkfML', '-x']


Re: [Python-Dev] PEP 3121, 384 Refactoring Issues

2014-07-10 Thread Brett Cannon
[for those that don't know, 3121 is extension module init/finalization and
384 is the stable ABI]

On Thu Jul 10 2014 at 3:47:03 PM, Mark Lawrence 
wrote:

> I'm just curious as to why there are 54 open issues after both of these
> PEPs have been accepted and 384 is listed as finished.  Did we hit some
> unforeseen technical problem which stalled development?
>

No, the PEPs were fine and were accepted properly. A huge portion of the
open issues are from Robin Schreiber, who, as part of GSoC 2012 --
https://www.google-melange.com/gsoc/project/details/google/gsoc2012/robin_hood/5668600916475904
-- went through and updated the stdlib to follow the new practices
introduced in the two PEPs. I'm not sure if there was some policy decision
made that updating the code wasn't worth it or whether people simply didn't
get around to applying the patches.

-Brett


>
> For these and any other open issues if you need some Windows testing
> doing please feel free to put me on the nosy list and ask for a test run.
>
> --
> My fellow Pythonistas, ask not what our language can do for you, ask
> what you can do for our language.
>
> Mark Lawrence
>


Re: [Python-Dev] PEP 3121, 384 Refactoring Issues

2014-07-14 Thread Brett Cannon
On Mon Jul 14 2014 at 11:27:34 AM, "Martin v. Löwis" 
wrote:

> On 12.07.14 17:19, Nick Coghlan wrote:
> > Using the stable ABI for standard library extensions also serves to
> > decouple them further from the internal details of the CPython runtime,
> > making it more likely they will be able to run correctly on alternative
> > interpreters (since emulating or otherwise supporting the limited API is
> > easier than supporting the whole thing).
>
> There are two features to be gained for the standard library from that:
>
> A. with proper module shutdown support, it will be possible to release
>objects that are currently held in C global/static variables, as the
>C global variables will go away. This, in turn, is a step forward in
>the desire to allow for proper leak-free interpreter shutdown, and
>in the desire to base interpreter shutdown on GC.
>
> B. with proper use of heap types (instead of the static type objects),
>support for the multiple-interpreter feature will be improved, since
>type objects will be per-interpreter, instead of being global. This,
>in turn, is desirable since otherwise state changes can leak from
>one interpreter to the other.
>
> So I still maintain that the change is desirable even for the standard
> library.
>

I agree for PEP 3121, which is the initialization/finalization work. The
stable ABI is not necessary. So maybe we should re-examine the patches and
accept the bits that clean up init/finalization and leave out any
ABI-related changes.


Re: [Python-Dev] Python Job Board

2014-07-14 Thread Brett Cannon
On Mon Jul 14 2014 at 12:17:03 PM, Ethan Furman  wrote:

> has now been dead for five months.
>

This is the wrong place to ask about this. It falls under the purview of
the website team, whom you can email at webmaster@ or you can submit an issue
at https://github.com/python/pythondotorg . But I know from PSF status
reports that it's being actively rewritten and fixed to make it manageable
for more than one person to run easily.


Re: [Python-Dev] Remaining decisions on PEP 471 -- os.scandir()

2014-07-20 Thread Brett Cannon
Oh yes. :) The file Antoine is referring to is the implementation of import.
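For context, a rough and purely hypothetical sketch (not the actual
_bootstrap.py code) of the listdir() + stat() pattern that PEP 471's
scandir() is meant to replace with a single directory scan:

import os
import stat

def entries_with_type(path):
    """listdir() + stat(): one extra system call per directory entry."""
    results = []
    for name in os.listdir(path):
        st = os.stat(os.path.join(path, name))   # extra stat() per entry
        results.append((name, stat.S_ISDIR(st.st_mode)))
    return results

def entries_with_type_scandir(path):
    """scandir(): the directory scan itself already carries the type info."""
    return [(entry.name, entry.is_dir()) for entry in os.scandir(path)]

print(sorted(entries_with_type(".")) == sorted(entries_with_type_scandir(".")))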

On Sun, Jul 20, 2014, 17:34 Ben Hoyt  wrote:

> > Have you tried modifying importlib's _bootstrap.py to use scandir()
> instead
> > of listdir() + stat()?
>
> No, I haven't -- I'm not familiar with that code. What does
> _bootstrap.py do -- does it do a lot of listdir calls and stat-ing of
> many files?
>
> -Ben


Re: [Python-Dev] Does Zip Importer have to be Special?

2014-07-24 Thread Brett Cannon
On Thu Jul 24 2014 at 1:07:12 PM, Phil Thompson 
wrote:

> I have an importer for use in applications that embed an interpreter
> that does a similar job to the Zip importer (except that the storage is
> a C data structure rather than a .zip file). Just like the Zip importer
> I need to import my importer and add it to sys.path_hooks. However the
> earliest opportunity I have to do this is after the Py_Initialize() call
> returns - but this is too late because some parts of the standard
> library have already needed to be imported.
>
> My current workaround is to include a modified version of _bootstrap.py
> as a frozen module that has the necessary steps added to the end of its
> _install() function.
>
> The Zip importer doesn't have this problem because it gets special
> treatment - the call to its equivalent code is hard-coded and happens
> exactly when needed.
>
> What would help is a table of functions that were called where
> _PyImportZip_Init() is currently called. By default the only entry in
> the table would be _PyImportZip_Init. There would be a way of modifying
> the table, either like how PyImport_FrozenModules is handled or how
> Inittab is handled.
>
> ...or if there is a better solution that I have missed that doesn't
> require a modified _bootstrap.py.
>

Basically you want a way to specify arguments to
importlib._bootstrap._install() so that sys.path_hooks and sys.meta_path
are configurable instead of hard-coded (it could also be done just after
importlib is installed, but that's a minor detail). Either way there is
technically no reason not to allow for it, just a lack of motivation, since
this would only come up for people who embed the interpreter AND have a
custom importer which affects loading the stdlib as well (any reason you
can't freeze the stdlib as a solution?).

We could go the route of some static array that people could modify.
Another option would be to allow for the specification of a single function
which is called just prior to importing the rest of the stdlib.

The problem with all of this is that you are essentially asking for a hook
to let your code have access to the interpreter state before it is fully
initialized. Zipimport and the various bits of code that get loaded during
startup are special since they are coded to avoid touching anything that
isn't ready to be used. So if we expose something that allows access prior
to full initialization it would have to be documented as having no
guarantees of interpreter state, etc., so we are not held to some API that
makes future improvements difficult.

IOW allowing for easy patching of Python is probably the best option I can
think of. Would tweaking importlib._bootstrap._install() to accept
specified values for sys.meta_path and sys.path_hooks be enough so that you
can change the call site for those functions?
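To make that concrete, a purely hypothetical sketch (the parameter names and
signature below are made up; _install() accepts no such arguments today) of
the kind of tweak being discussed:

import sys

# Hypothetical sketch only -- not the real importlib._bootstrap._install()
# signature. It just illustrates letting an embedder feed its own importer
# hooks in before the rest of the stdlib gets imported.
def _install(sys_module, extra_meta_path=None, extra_path_hooks=None):
    if extra_meta_path:
        # Embedder-supplied finders go first so they win the search.
        sys_module.meta_path[:0] = extra_meta_path
    if extra_path_hooks:
        sys_module.path_hooks[:0] = extra_path_hooks

# An embedding application with a custom importer would then do something
# like the following (my_importer is hypothetical):
#     _install(sys, extra_path_hooks=[my_importer.path_hook])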


Re: [Python-Dev] Does Zip Importer have to be Special?

2014-07-24 Thread Brett Cannon
On Thu Jul 24 2014 at 2:12:20 PM, Phil Thompson 
wrote:

> On 24/07/2014 6:48 pm, Brett Cannon wrote:
> > On Thu Jul 24 2014 at 1:07:12 PM, Phil Thompson
> > 
> > wrote:
> >
> >> I have an importer for use in applications that embed an interpreter
> >> that does a similar job to the Zip importer (except that the storage
> >> is
> >> a C data structure rather than a .zip file). Just like the Zip
> >> importer
> >> I need to import my importer and add it to sys.path_hooks. However the
> >> earliest opportunity I have to do this is after the Py_Initialize()
> >> call
> >> returns - but this is too late because some parts of the standard
> >> library have already needed to be imported.
> >>
> >> My current workaround is to include a modified version of
> >> _bootstrap.py
> >> as a frozen module that has the necessary steps added to the end of
> >> its
> >> _install() function.
> >>
> >> The Zip importer doesn't have this problem because it gets special
> >> treatment - the call to its equivalent code is hard-coded and happens
> >> exactly when needed.
> >>
> >> What would help is a table of functions that were called where
> >> _PyImportZip_Init() is currently called. By default the only entry in
> >> the table would be _PyImportZip_Init. There would be a way of
> >> modifying
> >> the table, either like how PyImport_FrozenModules is handled or how
> >> Inittab is handled.
> >>
> >> ...or if there is a better solution that I have missed that doesn't
> >> require a modified _bootstrap.py.
> >>
> >
> > Basically you want a way to specify arguments into
> > importlib._bootstrap._install() so that sys.path_hooks and
> > sys.meta_path
> > were configurable instead of hard-coded (it could also be done just
> > past
> > importlib being installed, but that's a minor detail). Either way there
> > is
> > technically no reason not to allow for it, just lack of motivation
> > since
> > this would only come up for people who embed the interpreter AND have a
> > custom importer which affects loading the stdlib as well (any reason
> > you
> > can't freeze the stdblib as a solution?).
>
> Not really. I'd lose the compression my importer implements.
>
> (Are there any problems with freezing packages rather than simple
> modules?)
>

Nope, modules and packages are both supported.


>
> > We could go the route of some static array that people could modify.
> > Another option would be to allow for the specification of a single
> > function
> > which is called just prior to importing the rest of the stdlib,
> >
> > The problem with all of this is you are essentially asking for a hook
> > to
> > let you have code have access to the interpreter state before it is
> > fully
> > initialized. Zipimport and the various bits of code that get loaded
> > during
> > startup are special since they are coded to avoid touching anything
> > that
> > isn't ready to be used. So if we expose something that allows access
> > prior
> > to full initialization it would have to be documented as having no
> > guarantees of interpreter state, etc. so we are not held to some API
> > that
> > makes future improvements difficult.
> >
> > IOW allowing for easy patching of Python is probably the best option I
> > can
> > think of. Would tweaking importlib._bootstrap._install() to accept
> > specified values for sys.meta_path and sys.path_hooks be enough so that
> > you
> > can change the call site for those functions?
>
> My importer runs under PathFinder so it needs sys.path as well (and
> doesn't need sys.meta_path).
>

sys.path can be set via PYTHONPATH, etc. so that shouldn't be as much of an
issue.
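For example, a minimal illustration (the directory name is made up):

import os
import sys

# Anything listed in PYTHONPATH is added near the front of sys.path at
# startup, so a PathFinder-based importer can pick its entries up from there
# without any C-level changes.
#
#   $ PYTHONPATH=/opt/embedded/resources python3 example.py
print(os.environ.get("PYTHONPATH"))              # whatever the embedder exported
print([p for p in sys.path if "embedded" in p])  # ...ends up in sys.path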


Re: [Python-Dev] Contribute to Python.org

2014-07-29 Thread Brett Cannon
On Tue Jul 29 2014 at 4:52:14 PM agrim khanna  wrote:

> Respected Sir/Madam,
>
> I have installed the setup on my machine and have compiled and run it as
> well. I was unable to figure out how to make a patch and how to find a
> suitable bug for me to fix. I request you to guide me in the same.
>

How to make a patch is covered in the devguide, which was sent to you in your
last email: https://docs.python.org/devguide/patch.html

Finding issues is also covered in the devguide, and you can also ask for help
on the core-mentorship mailing list (also in the last email sent to you:
http://pythonmentors.com/).


>
> Yours Sincerely,
> Agrim Khanna
> IIIT-Allahabad, India


Re: [Python-Dev] [Python-checkins] Daily reference leaks (09f56fdcacf1): sum=21004

2014-08-07 Thread Brett Cannon
test_codecs is not happy. Looking at the subject lines of commit emails
from the past day I don't see any obvious cause.

On Thu Aug 07 2014 at 4:35:05 AM  wrote:

> results for 09f56fdcacf1 on branch "default"
> 
>
> test_codecs leaked [5825, 5825, 5825] references, sum=17475
> test_codecs leaked [1172, 1174, 1174] memory blocks, sum=3520
> test_collections leaked [0, 2, 0] references, sum=2
> test_functools leaked [0, 0, 3] memory blocks, sum=3
> test_site leaked [0, 2, 0] references, sum=2
> test_site leaked [0, 2, 0] memory blocks, sum=2
>
>
> Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R',
> '3:3:/home/antoine/cpython/refleaks/reflogdA4OO6', '-x']


Re: [Python-Dev] Bytes path support

2014-08-20 Thread Brett Cannon
On Wed Aug 20 2014 at 9:02:25 AM Antoine Pitrou  wrote:

> On 20/08/2014 07:08, Nick Coghlan wrote:
> >
> > It's not just the JVM that says text and binary APIs should be separate
> > - it's every widely used operating system services layer except POSIX.
> > The POSIX way works well *if* everyone reliably encodes things as UTF-8
> > or always uses encoding detection, but its failure mode is unfortunately
> > silent data corruption.
> >
> > That said, there's a lot of Python software that is POSIX specific,
> > where bytes paths would be the least of the barriers to porting to
> > Windows or Jython. I'm personally +1 on consistently allowing binary
> > paths in lower level APIs, but disallowing them in higher level
> > explicitly cross platform abstractions like pathlib.
>
> I fully agree with Nick's position here.
>
> To elaborate specifically about pathlib, it doesn't handle bytes paths
> but allows you to generate them if desired:
> https://docs.python.org/3/library/pathlib.html#operators
>
> Adding full bytes support to pathlib would have added a lot of
> complication and fragility in the implementation *and* in the API (is it
> allowed to combine str and bytes paths? should they have separate
> classes?), for arguably little benefit.
>
> I think if you want low-level features (such as unconverted bytes paths
> under POSIX), it is reasonable to point you to low-level APIs.
>

+1 from me as well. Allowing the low-level stuff to work on bytes while
keeping the high-level APIs actually high-level keeps with our consenting
adults policy as well as making things possible, but not to the detriment of
the common case.
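A small sketch of that split, using a hypothetical file name -- the
low-level os APIs accept and return bytes, while pathlib stays str-based but
can hand you bytes via the documented bytes(p) conversion:

import os
import pathlib

# Low-level: passing bytes to os-level APIs gets you bytes results back.
for name in os.listdir(b"."):
    assert isinstance(name, bytes)

# High-level: pathlib itself is str-only, but converting a path to bytes
# when an os-level call needs one is a one-liner.
p = pathlib.Path("some-file.txt")   # hypothetical name
raw = bytes(p)                      # encoded with the filesystem encoding
print(raw)                          # b'some-file.txt'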


Re: [Python-Dev] cpython and parallel make

2014-09-01 Thread Brett Cannon
 On Mon, Sep 1, 2014, 15:16 Victor Stinner  wrote:

Hi,

My bashrc sets MAKEFLAGS to -j9 and Python compilation works fine on Fedora
20 with GNU make and GCC. My computer has 8 cores (4 physical with hyper
threading).

It looks like your compiler is Clang. What is your OS and OS version?

I compile with -j8 with Clang on OS X and never have issues.

-Brett

Can you try to run make in verbose mode and attach the full log to your
email? E.g. try make SHELL="bash -x" to see the executed shell commands.
(Run "make clean" first.)

Victor



Re: [Python-Dev] cpython and parallel make

2014-09-05 Thread Brett Cannon
Open an issue on bugs.python.org and attach the patch there (it should also
ask you to sign the contributor agreement; if not, then please also sign
that).

On Fri Sep 05 2014 at 12:52:45 PM Jonas Wagner  wrote:

> Hi again,
>
> the attached Makefile patch seems to fix the parallel build problems.
>
> Is there a chance of getting this into trunk? Should I open an issue
> or send the patch somewhere else?
>
> Cheers,
> Jonas
>
> On Fri, Sep 5, 2014 at 12:15 PM, Jonas Wagner 
> wrote:
> >>> > Would people be interested in having a parallel version?
> >>>
> >>> See http://bugs.python.org/issue5309
> >>
> >> Cool! I'll look into this.
> >
> > The patch there works well for me. I've made one small update, and
> > submitted the new version in the bug tracker.
> >
> > Regarding the other build problem, I might have found some hint:
> > - Parser/pgen.o ends up in both the PARSER_OBJS and PGENOBJS variables
> > in the Makefile
> > - PARSER_OBJS is depended upon in a few places, hence it could be that
> > make starts to build Parser/pgen.o
> > - PGENOBJS is built when building PGEN, which happens *in a different
> > make that is called recursively*
> >
> > I think the culprit is the rule for GRAMMAR_H which calls make
> > recursively. Is there a reason that GRAMMAR_H has to generate PGEN
> > like this? Couldn't it just depend on PGEN?
> >
> > Cheers,
> > Jonas


Re: [Python-Dev] Interactive Grid for Sorting, Filtering DataFrames in IPython Notebook

2014-10-07 Thread Brett Cannon
Python-dev is for discussing the development *of* Python, not *with* it.
This kind of thing is more appropriate for python-list.

On Tue Oct 07 2014 at 11:49:37 AM tshawver  wrote:

> As part of the work on our research environment at  Quantopian
>   , I've been building an extension
> which renders pandas DataFrames as interactive grids in the IPython
> notebook.  The extension uses a Javascript library called SlickGrid to
> render the grids, and the current state of the project can be found here on
> GitHub: https://github.com/quantopian/qgrid
>
> The extension is also viewable in nbviewer:
> http://nbviewer.ipython.org/github/quantopian/qgrid/blob/
> master/qgrid_demo.ipynb
>
> The GitHub repository contains some explanation of why I chose to implement
> the grid as a Python package rather than a standard IPython notebook
> extension, which might be interesting for other people who are looking to
> add functionality to the notebook in a similar way.
>
> -Tim
>
>
>
> --
> View this message in context: http://python.6.x6.nabble.com/
> Interactive-Grid-for-Sorting-Filtering-DataFrames-in-
> IPython-Notebook-tp5073931.html
> Sent from the Python - python-dev mailing list archive at Nabble.com.


Re: [Python-Dev] performance delta with .py presence v.s. only pyc on python 2.7.x?

2014-10-07 Thread Brett Cannon
On Tue Oct 07 2014 at 2:24:52 PM Skip Montanaro 
wrote:

> On Tue, Oct 7, 2014 at 12:46 PM, John Smith  wrote:
> > pyc-only install sees mediocre performance. (pyc's are built using
> > compileall.py, then source .py's removed before packaging)
>
> (Warning: it's been probably a decade since I looked at any of this
> stuff, so treat this response as mere conjecture.)
>
> I'd take a look at startup time and things like stat(2) calls in the
> presence or absence of .py files. It's possible that it tries all
> other possible file extensions before considering .pyc. If you have a
> long sys.path, it would then run through all the other file extension
> possibilities before trying the .pyc. OTOH, if the .py is present, it
> might be found early in the search, then as an optimization, look for
> a .pyc file it can use rather than compiling the .py file. How long is
> sys.path?
>

The extension check is per sys.path entry, so sys.path is the outer loop,
not the file extension list. The relevant code is all in
https://hg.python.org/cpython/file/05f70805f37f/Python/import.c for Python
2.7 and the search code is
https://hg.python.org/cpython/file/05f70805f37f/Python/import.c#l1291 .

But with the code being a black box there is no good way to answer this
question. E.g. if they have a custom finder that is very costly when there
is no source then that could explain this. But you're talking **app**
performance and not import performance, so either something on your system
or in that code is very quirky if it is leading to an actual performance
loss at that level (import costs are usually washed out, and bytecode is
literally just the internal representation of source after compilation, so
there is no semantic difference at execution time if the same source is
used).
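As a rough Python-level sketch of that search order (illustrative only; the
real 2.7 logic lives in C in Python/import.c and the exact suffix list
differs by platform and build):

import os

# sys.path is the outer loop and the suffix list the inner one, so a long
# sys.path costs far more stat() calls than the handful of extensions tried.
SUFFIXES = [".so", "module.so", ".py", ".pyc"]  # roughly what 2.7 checks on POSIX

def find_module_file(name, search_path):
    for directory in search_path:          # outer loop: sys.path entries
        for suffix in SUFFIXES:            # inner loop: known extensions
            candidate = os.path.join(directory, name + suffix)
            if os.path.isfile(candidate):  # one stat() per candidate
                return candidate
    return None

# e.g. find_module_file("spam", sys.path) scans every directory in order.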


Re: [Python-Dev] The role of NotImplemented: What is it for and when should it be used?

2014-11-03 Thread Brett Cannon
On Mon Nov 03 2014 at 5:31:21 AM Ethan Furman  wrote:

> Just to be clear, this is about NotImplemented, not NotImplementedError.
>
> tl;dr  When a binary operation fails, should an exception be raised or
> NotImplemented returned?
>

The docs for NotImplemented suggest it's only for rich comparison methods
and not all binary operators:
https://docs.python.org/3/library/constants.html#NotImplemented . But had I
not read that, I would have said all binary operator methods should return
NotImplemented when the types are incompatible.
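For illustration, a minimal toy example (my own made-up classes, not stdlib
code) of the convention in question -- returning NotImplemented from __add__
so the other operand's __radd__ gets a chance before Python raises TypeError:

class Metres:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if not isinstance(other, Metres):
            # Signal "I don't know how to do this" so Python will try
            # other.__radd__(self) before raising TypeError itself.
            return NotImplemented
        return Metres(self.value + other.value)

class Feet:
    def __init__(self, value):
        self.value = value

    def __radd__(self, other):
        if isinstance(other, Metres):
            return Metres(other.value + self.value * 0.3048)
        return NotImplemented

print((Metres(1) + Feet(3)).value)  # ~1.9144: Feet.__radd__ got its chance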

-Brett


>
>
> When a binary operation in Python is attempted, there are two
> possibilities:
>
>- it can work
>- it can't work
>
> The main reason [1] that it can't work is that the two operands are of
> different types, and the first type does not know
> how to deal with the second type.
>
> The question then becomes: how does the first type tell Python that it
> cannot perform the requested operation?  The most
> obvious answer is to raise an exception, and TypeError is a good
> candidate.  The problem with the exception raising
> approach is that once an exception is raised, Python doesn't try anything
> else to make the operation work.
>
> What's wrong with that?  Well, the second type might know how to perform
> the operation, and in fact that is why we have
> the reflected special methods, such as __radd__ and __rmod__ -- but if the
> first type raises an exception the __rxxx__
> methods will not be tried.
>
> Okay, how can the first type tell Python that it cannot do what is
> requested, but to go ahead and check with the second
> type to see if it does?  That is where NotImplemented comes in -- if a
> special method (and only a special method)
> returns NotImplemented then Python will check to see if there is anything
> else it can do to make the operation succeed;
> if all attempts return NotImplemented, then Python itself will raise an
> appropriate exception [2].
>
> In an effort to see how often NotImplemented is currently being returned I
> crafted a test script [3] to test the types
> bytes, bytearray, str, dict, list, tuple, Enum, Counter, defaultdict,
> deque, and OrderedDict with the operations for
> __add__, __and__, __floordiv__, __iadd__, __iand__, __ifloordiv__,
> __ilshift__, __imod__, __imul__, __ior__, __ipow__,
> __irshift__, __isub__, __itruediv__, __ixor__, __lshift__, __mod__,
> __mul__, __or__, __pow__, __rshift__, __sub__,
> __truediv__, and __xor__.
>
> Here are the results of the 275 tests:
> 
> 
> testing control...
>
> ipow -- Exception  and 'subtype'> raised
> errors in Control -- misunderstanding or bug?
>
> testing types against a foreign class
>
> iadd(Counter()) -- Exception <'SomeOtherClass' object has no attribute
> 'items'> raised instead of TypeError
> iand(Counter()) -- NotImplemented not returned, TypeError not raised
> ior(Counter()) -- Exception <'SomeOtherClass' object has no attribute
> 'items'> raised instead of TypeError
> isub(Counter()) -- Exception <'SomeOtherClass' object has no attribute
> 'items'> raised instead of TypeError
>
>
> testing types against a subclass
>
> mod(str()) -- NotImplemented not returned, TypeError not raised
>
> iadd(Counter()) -- Exception <'subtype' object has no attribute 'items'>
> raised (should have worked)
> iand(Counter()) -- NotImplemented not returned, TypeError not raised
> ior(Counter()) -- Exception <'subtype' object has no attribute 'items'>
> raised (should have worked)
> isub(Counter()) -- Exception <'subtype' object has no attribute 'items'>
> raised (should have worked)
> 
> 
>
> Two observations:
>
>- __ipow__ doesn't seem to behave properly in the 3.x line (that error
> doesn't show up when testing against 2.7)
>
>- Counter should be returning NotImplemented instead of raising an
> AttributeError, for three reasons [4]:
>  - a TypeError is more appropriate
>  - subclasses /cannot/ work with the current implementation
>  - __iand__ is currently a silent failure if the Counter is empty, and
> the other operand should trigger a failure
>
> Back to the main point...
>
> So, if my understanding is correct:
>
>- NotImplemented is used to signal Python that the requested operation
> could not be performed
>- it should be used by the binary special methods to signal type
> mismatch failure, so any subclass gets a chance to work.
>
> Is my understanding correct?  Is this already in the docs somewhere, and I
> just missed it?
>
> --
> ~Ethan~
>
> [1] at least, it's the main reason in my code
> [2] usually a TypeError, stating either that the operation is not
> supported, or the types are unorderable
> [3] test script at the end
> [4] https://bugs.python.org/issue22766 [returning NotImplemented was
> rejected]
>
> -- 8< 
> 

Re: [Python-Dev] [Python-checkins] cpython (2.7): #22650: test suite: load Unicode test data files from www.pythontest.net

2014-11-06 Thread Brett Cannon
What is pythontest.net? Is it something we control, and if so how do we add
things to it for tests? Did I miss an email on python-dev or
python-committers about this?

On Thu Nov 06 2014 at 8:57:22 AM georg.brandl 
wrote:

> https://hg.python.org/cpython/rev/0af36ea1d010
> changeset:   93417:0af36ea1d010
> branch:  2.7
> parent:  93401:3e8d3c4bc17e
> user:Georg Brandl 
> date:Thu Nov 06 14:37:49 2014 +0100
> summary:
>   #22650: test suite: load Unicode test data files from www.pythontest.net
>
> files:
>   Lib/test/test_codecmaps_cn.py  |   8 +++-
>   Lib/test/test_codecmaps_hk.py  |   2 +-
>   Lib/test/test_codecmaps_jp.py  |  12 +---
>   Lib/test/test_codecmaps_kr.py  |   8 +++-
>   Lib/test/test_codecmaps_tw.py  |   6 ++
>   Lib/test/test_normalization.py |   2 +-
>   6 files changed, 15 insertions(+), 23 deletions(-)
>
>
> diff --git a/Lib/test/test_codecmaps_cn.py b/Lib/test/test_codecmaps_cn.py
> --- a/Lib/test/test_codecmaps_cn.py
> +++ b/Lib/test/test_codecmaps_cn.py
> @@ -10,19 +10,17 @@
>  class TestGB2312Map(test_multibytecodec_support.TestBase_Mapping,
> unittest.TestCase):
>  encoding = 'gb2312'
> -mapfileurl = 'http://people.freebsd.org/~perky/i18n/EUC-CN.TXT'
> +mapfileurl = 'http://www.pythontest.net/unicode/EUC-CN.TXT'
>
>  class TestGBKMap(test_multibytecodec_support.TestBase_Mapping,
> unittest.TestCase):
>  encoding = 'gbk'
> -mapfileurl = 'http://www.unicode.org/Public/MAPPINGS/VENDORS/' \
> - 'MICSFT/WINDOWS/CP936.TXT'
> +mapfileurl = 'http://www.pythontest.net/unicode/CP936.TXT'
>
>  class TestGB18030Map(test_multibytecodec_support.TestBase_Mapping,
>   unittest.TestCase):
>  encoding = 'gb18030'
> -mapfileurl = 'http://source.icu-project.org/repos/icu/data/' \
> - 'trunk/charset/data/xml/gb-18030-2000.xml'
> +mapfileurl = 'http://www.pythontest.net/unicode/gb-18030-2000.xml'
>
>
>  def test_main():
> diff --git a/Lib/test/test_codecmaps_hk.py b/Lib/test/test_codecmaps_hk.py
> --- a/Lib/test/test_codecmaps_hk.py
> +++ b/Lib/test/test_codecmaps_hk.py
> @@ -10,7 +10,7 @@
>  class TestBig5HKSCSMap(test_multibytecodec_support.TestBase_Mapping,
> unittest.TestCase):
>  encoding = 'big5hkscs'
> -mapfileurl = 'http://people.freebsd.org/~
> perky/i18n/BIG5HKSCS-2004.TXT'
> +mapfileurl = 'http://www.pythontest.net/unicode/BIG5HKSCS-2004.TXT'
>
>  def test_main():
>  test_support.run_unittest(__name__)
> diff --git a/Lib/test/test_codecmaps_jp.py b/Lib/test/test_codecmaps_jp.py
> --- a/Lib/test/test_codecmaps_jp.py
> +++ b/Lib/test/test_codecmaps_jp.py
> @@ -10,8 +10,7 @@
>  class TestCP932Map(test_multibytecodec_support.TestBase_Mapping,
> unittest.TestCase):
>  encoding = 'cp932'
> -mapfileurl = 'http://www.unicode.org/Public/MAPPINGS/VENDORS/MICSFT/'
> \
> - 'WINDOWS/CP932.TXT'
> +mapfileurl = 'http://www.pythontest.net/unicode/CP932.TXT'
>  supmaps = [
>  ('\x80', u'\u0080'),
>  ('\xa0', u'\uf8f0'),
> @@ -27,15 +26,14 @@
>   unittest.TestCase):
>  encoding = 'euc_jp'
>  mapfilename = 'EUC-JP.TXT'
> -mapfileurl = 'http://people.freebsd.org/~perky/i18n/EUC-JP.TXT'
> +mapfileurl = 'http://www.pythontest.net/unicode/EUC-JP.TXT'
>
>
>  class TestSJISCOMPATMap(test_multibytecodec_support.TestBase_Mapping,
>  unittest.TestCase):
>  encoding = 'shift_jis'
>  mapfilename = 'SHIFTJIS.TXT'
> -mapfileurl = 'http://www.unicode.org/Public/MAPPINGS/OBSOLETE' \
> - '/EASTASIA/JIS/SHIFTJIS.TXT'
> +mapfileurl = 'http://www.pythontest.net/unicode/SHIFTJIS.TXT'
>  pass_enctest = [
>  ('\x81_', u'\\'),
>  ]
> @@ -49,14 +47,14 @@
>   unittest.TestCase):
>  encoding = 'euc_jisx0213'
>  mapfilename = 'EUC-JISX0213.TXT'
> -mapfileurl = 'http://people.freebsd.org/~perky/i18n/EUC-JISX0213.TXT'
> +mapfileurl = 'http://www.pythontest.net/unicode/EUC-JISX0213.TXT'
>
>
>  class TestSJISX0213Map(test_multibytecodec_support.TestBase_Mapping,
> unittest.TestCase):
>  encoding = 'shift_jisx0213'
>  mapfilename = 'SHIFT_JISX0213.TXT'
> -mapfileurl = 'http://people.freebsd.org/~
> perky/i18n/SHIFT_JISX0213.TXT'
> +mapfileurl = 'http://www.pythontest.net/unicode/SHIFT_JISX0213.TXT'
>
>
>  def test_main():
> diff --git a/Lib/test/test_codecmaps_kr.py b/Lib/test/test_codecmaps_kr.py
> --- a/Lib/test/test_codecmaps_kr.py
> +++ b/Lib/test/test_codecmaps_kr.py
> @@ -10,14 +10,13 @@
>  class TestCP949Map(test_multibytecodec_support.TestBase_Mapping,
> unittest.TestCase):
>  encoding = 'cp949'
> -mapfileurl = 'http://www.unicode.org/Public/MAPPINGS/VENDORS/MICSFT'
> \
> - '/WINDOWS/CP949.TXT'
> +mapfileurl = 'http://www.py

Re: [Python-Dev] [Python-checkins] cpython (2.7): #22650: test suite: load Unicode test data files from www.pythontest.net

2014-11-06 Thread Brett Cannon
Ah, cool! Just an FYI, the index.html file is not being served for me.

-Brett

On Thu Nov 06 2014 at 9:41:59 AM Benjamin Peterson 
wrote:

>
>
> On Thu, Nov 6, 2014, at 09:39, Brett Cannon wrote:
> > What is pythontest.net? Is it something we control, and if so how do we
> > add
> > things to it for tests? Did I miss an email on python-dev or
> > python-committers about this?
>
> See https://bugs.python.org/issue22650
>
> >
> > On Thu Nov 06 2014 at 8:57:22 AM georg.brandl
> > 
> > wrote:
> >
> > > https://hg.python.org/cpython/rev/0af36ea1d010
> > > changeset:   93417:0af36ea1d010
> > > branch:  2.7
> > > parent:  93401:3e8d3c4bc17e
> > > user:Georg Brandl 
> > > date:Thu Nov 06 14:37:49 2014 +0100
> > > summary:
> > >   #22650: test suite: load Unicode test data files from
> www.pythontest.net
> > >
> > > files:
> > >   Lib/test/test_codecmaps_cn.py  |   8 +++-
> > >   Lib/test/test_codecmaps_hk.py  |   2 +-
> > >   Lib/test/test_codecmaps_jp.py  |  12 +---
> > >   Lib/test/test_codecmaps_kr.py  |   8 +++-
> > >   Lib/test/test_codecmaps_tw.py  |   6 ++
> > >   Lib/test/test_normalization.py |   2 +-
> > >   6 files changed, 15 insertions(+), 23 deletions(-)
> > >
> > >
> > > diff --git a/Lib/test/test_codecmaps_cn.py
> b/Lib/test/test_codecmaps_cn.py
> > > --- a/Lib/test/test_codecmaps_cn.py
> > > +++ b/Lib/test/test_codecmaps_cn.py
> > > @@ -10,19 +10,17 @@
> > >  class TestGB2312Map(test_multibytecodec_support.TestBase_Mapping,
> > > unittest.TestCase):
> > >  encoding = 'gb2312'
> > > -mapfileurl = 'http://people.freebsd.org/~perky/i18n/EUC-CN.TXT'
> > > +mapfileurl = 'http://www.pythontest.net/unicode/EUC-CN.TXT'
> > >
> > >  class TestGBKMap(test_multibytecodec_support.TestBase_Mapping,
> > > unittest.TestCase):
> > >  encoding = 'gbk'
> > > -mapfileurl = 'http://www.unicode.org/Public/MAPPINGS/VENDORS/' \
> > > - 'MICSFT/WINDOWS/CP936.TXT'
> > > +mapfileurl = 'http://www.pythontest.net/unicode/CP936.TXT'
> > >
> > >  class TestGB18030Map(test_multibytecodec_support.TestBase_Mapping,
> > >   unittest.TestCase):
> > >  encoding = 'gb18030'
> > > -mapfileurl = 'http://source.icu-project.org/repos/icu/data/' \
> > > - 'trunk/charset/data/xml/gb-18030-2000.xml'
> > > +mapfileurl = 'http://www.pythontest.net/unicode/gb-18030-2000.xml
> '
> > >
> > >
> > >  def test_main():
> > > diff --git a/Lib/test/test_codecmaps_hk.py
> b/Lib/test/test_codecmaps_hk.py
> > > --- a/Lib/test/test_codecmaps_hk.py
> > > +++ b/Lib/test/test_codecmaps_hk.py
> > > @@ -10,7 +10,7 @@
> > >  class TestBig5HKSCSMap(test_multibytecodec_support.TestBase_Mapping,
> > > unittest.TestCase):
> > >  encoding = 'big5hkscs'
> > > -mapfileurl = 'http://people.freebsd.org/~
> > > perky/i18n/BIG5HKSCS-2004.TXT'
> > > +mapfileurl = 'http://www.pythontest.net/uni
> code/BIG5HKSCS-2004.TXT'
> > >
> > >  def test_main():
> > >  test_support.run_unittest(__name__)
> > > diff --git a/Lib/test/test_codecmaps_jp.py
> b/Lib/test/test_codecmaps_jp.py
> > > --- a/Lib/test/test_codecmaps_jp.py
> > > +++ b/Lib/test/test_codecmaps_jp.py
> > > @@ -10,8 +10,7 @@
> > >  class TestCP932Map(test_multibytecodec_support.TestBase_Mapping,
> > > unittest.TestCase):
> > >  encoding = 'cp932'
> > > -mapfileurl = 'http://www.unicode.org/Public/MAPPINGS/VENDORS/
> MICSFT/'
> > > \
> > > - 'WINDOWS/CP932.TXT'
> > > +mapfileurl = 'http://www.pythontest.net/unicode/CP932.TXT'
> > >  supmaps = [
> > >  ('\x80', u'\u0080'),
> > >  ('\xa0', u'\uf8f0'),
> > > @@ -27,15 +26,14 @@
> > >   unittest.TestCase):
> > >  encoding = 'euc_jp'
> > >  mapfilename = 'EUC-JP.TXT'
> > > -mapfileurl = 'http://people.freebsd.org/~perky/i18n/EUC-JP.TXT'

Re: [Python-Dev] Static checker for common Python programming errors

2014-11-17 Thread Brett Cannon
On Mon Nov 17 2014 at 12:06:15 PM Stefan Bucur 
wrote:

> Mark, thank you for the pointer! I will re-send my message there. Should I
> include both mailing lists in a single thread if I end up receiving replies
> from both?


No, as cross-posting becomes just a nightmare of moderation when someone is
not on both lists; please only post to a single mailing list.

-Brett


>
> Cheers,
> Stefan
>
>
> On Mon Nov 17 2014 at 4:04:45 PM Mark Shannon  wrote:
>
>> Hi,
>>
>> I think this might be a bit off-topic for this mailing list,
>> code-qual...@python.org is the place for discussing static analysis
>> tools.
>>
>> Although if anyone does have any comments on any particular checks
>> they would like, I would be interested as well.
>>
>> Cheers,
>> Mark.
>>
>>
>> On 17/11/14 14:49, Stefan Bucur wrote:
>> > I'm developing a Python static analysis tool that flags common
>> > programming errors in Python programs. The tool is meant to complement
>> > other tools like Pylint (which perform checks at lexical and syntactic
>> > level) by going deeper with the code analysis and keeping track of the
>> > possible control flow paths in the program (path-sensitive analysis).
>> >
>> > For instance, a path-sensitive analysis detects that the following
>> > snippet of code would raise an AttributeError exception:
>> >
>> > if object is None: # If the True branch is taken, we know the object is None
>> >     object.doSomething() # ... so this statement would always fail
>> >
>> > I'm writing first to the Python developers themselves to ask, in their
>> > experience, what common pitfalls in the language & its standard library
>> > such a static checker should look for. For instance, here [1] is a list
>> > of static checks for the C++ language, as part of the Clang static
>> > analyzer project.
>> >
>> > My preliminary list of Python checks is quite rudimentary, but maybe
>> > could serve as a discussion starter:
>> >
>> > * Proper Unicode handling (for 2.x)
>> >- encode() is not called on str object
>> >- decode() is not called on unicode object
>> > * Check for integer division by zero
>> > * Check for None object dereferences
>> >
>> > Thanks a lot,
>> > Stefan Bucur
>> >
>> > [1] http://clang-analyzer.llvm.org/available_checks.html
>> >
>> >
>> >


Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-21 Thread Brett Cannon
On Fri Nov 21 2014 at 7:37:13 AM Nick Coghlan  wrote:

> For those that aren't aware, PEP 474 is a PEP I wrote a while back
> suggesting we set up a "forge.python.org" service that provides easier
> management of Mercurial repos that don't have the complex branching
> requirements of the main CPython repo. Think repos like the PEPs repo,
> or the developer guide repo.
>
> The primary objective of the PEP is to enable an online editing + pull
> request style workflow for these pure documentation projects that only
> have a single active branch.
>
> I'd been taking "must be hosted in PSF infrastructure" as a hard
> requirement, but MAL pointed out earlier this evening that in the age
> of DVCS's, that requirement may not make sense: if you avoid tightly
> coupling your automation to a particular DVCS host's infrastructure,
> then reverting back to self-hosting (if that becomes necessary for
> some reason) is mostly just a matter of "hg push".
>

I don't view self-hosting as ever being a requirement. We hosted ourselves
mainly to fully control commit messages (we do like to be very explicit
after all =). Because we didn't want to pollute our message log with
people's own messages which didn't follow our commit log guidelines or
weren't of high enough caliber, we chose to fully control the hosting so as
not to give people a false hope that we would accept a pull request.


>
> If that "must be self-hosted" constraint is removed, then the obvious
> candidate for Mercurial hosting that supports online editing + pull
> requests is the PSF's BitBucket account.
>

There's also CodePlex and (ironically) SourceForge for open-source hg
hosting.


>
> There'd still be some work in such a change to make sure we didn't
> break automated regeneration of associated site elements, but that's
> still a lot simpler than adding an entirely new piece of
> infrastructure.
>
> If folks are amenable to that variant of the idea, I'll undefer PEP
> 474 and revise it accordingly, with the developer guide and the PEP's
> repo as the initially proposed candidates for transfer.
>

I think showing us how to ignore PR comments and only show those from
merges and direct commits on a branch (e.g. in blame, log output, etc.)
would help, i.e. how to work with Mercurial so that I only see commit
messages from core developers or ones they could directly edit.


Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-21 Thread Brett Cannon
On Fri Nov 21 2014 at 8:57:15 AM Nick Coghlan  wrote:

> On 21 November 2014 23:29, Brett Cannon  wrote:
> > On Fri Nov 21 2014 at 7:37:13 AM Nick Coghlan 
> wrote:
> >> If that "must be self-hosted" constraint is removed, then the obvious
> >> candidate for Mercurial hosting that supports online editing + pull
> >> requests is the PSF's BitBucket account.
> >
> > There's also CodePlex and (ironically) SourceForge for open-source hg
> > hosting.
>
> Did SF end up actually integrating Hg hosting properly? They hadn't
> the last time I looked - it was still a third party addon to Allura.
>
> I'll spell this out in the PEP, but the reason I suggest BitBucket in
> particular is:
>
> - it's written in Python
> - the PSF already has an organisational account set up there
> - I have admin access, so I can bootstrap other folks as
> administrators (Christian Heimes & Brian Curtin are also admins)
> - I know the online editing works reliably, since I maintain the PyPI
> metadata PEP drafts there
> - having used both it and GitHub extensively, I'm confident the
> workflows are similar enough that anyone familiar with GitHub will be
> able to easily pick up the BitBucket UI
>

You're putting more thought into the response than I did in the suggestion.
=) I just know they claim to host hg repos.


>
> As far as ignoring PR noise goes, we can still request that folks
> squash any commits (keep in mind that the proposal is only to move
> pure documentation repos, so long complex PR chains seem unlikely).
>

Well, requesting that and actually getting it are two different things,
especially when I don't know of any way to rewrite a commit message after
the fact if we go back to someone and say "your commit message is bad,
please fix it".


Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-22 Thread Brett Cannon
On Sat Nov 22 2014 at 10:00:03 AM Nick Coghlan  wrote:

>
> On 22 Nov 2014 07:37, "Donald Stufft"  wrote:
> > > On Nov 21, 2014, at 3:59 PM, Ned Deily  wrote:
>
> > > Sure, I get that.  But we're not even talking here about the main
> Python
> > > docs since they are part of the CPython repos, only ancillary repos
> like
> > > PEPs and the developer's guide.  The level of activity on those is
> quite
> > > small.  So, thinking about it a bit more, PEPs don't normally have bug
> > > tracker issues associated with them so I suppose my concerns about
> issue
> > > tracker aren't a major concern for them.  The dev guide does get issues
> > > opened about it and I suppose they could be managed.  But, without
> > > tackling the CPython repo workflow (a *much* bigger deal), is the
> > > divergence in workflows worth it?  I dunno.
>
> I also think the tutorial and howto guides should be broken out of the
> main CPython repo &  made version independent (with internal version
> specific notes).
>
> That offers no compelling advantages right now, but becomes far more
> beneficial if it comes with a switch to enabling online editing.
>
> > Yea for the smaller repositories I don’t have a whole lot of opinion
> > about if the benefit buys us much, especially since one of the goals
> > is new-person friendliness but the problem is that it doesn’t translate
> > to contributing to CPython itself.
>
> OK, different question. Has anyone here actually even *read* the workflow
> PEPs I wrote? They were on the agenda for the language summit, but got
> bumped due to lack of time (which I'm still annoyed about, given the
> comparatively inconsequential things that chewed up a whole lot of the day).
>

I did and was looking forward to them coming to fruition.


> I've only had a couple of folks from outside the core dev community
> express interest in them. Personally, the lack of online editing support
> annoys me immensely whenever I need to work on PEPs or the devguide. I also
> think it's ridiculous that we have dozens (hundreds?) of folks running
> community workshops, and all creating their own custom documentation,
> rather than us finding a way to better enable their collaboration on the
> official tutorial.
>
> The BitBucket proposal in this thread came out of a desire to avoid adding
> yet more work to an understaffed group of primarily volunteers maintaining
> the infrastructure (the paid admins are more focused on incident response
> and general infrastructure support, rather than spinning up new workflow
> services).
>
> My preferred answer remains setting up a srlf-hosted forge.python.org,
> but I've seen little evidence we have the capacity to deploy & maintain
> such a service effectively, given the relative lack of interest shown in
> the idea by almost everyone I've spoken to about it. Any progress has only
> come with a lot of pushing from me, and I just don't have the personal
> bandwidth to sustain that at this point. That's why the related PEPs were
> deferred, and the only responses I've received regarding potentially taking
> them over have come from folks outside the core development community,
> which really doesn't help very much in removing my availability as a
> bottleneck in the workflow improvement process.
>
> If nobody wants to maintain a self-hosted forge, or even enable the folks
> that have expressed interest in setting it up & maintaining it, then the
> right answer is "don't do it" - we should use a commercial service instead.
>
There are two goals to any improvement to the development workflow: that
which helps the core devs and that which helps everyone else. For helping
core devs that's getting some CI set up which will test every patch
submitted, single-click patch committal from the issue tracker, etc. For
everyone else it's inline editing and whatever it takes to get patches
accepted faster (I know Nick is pointing out he wants inline editing for
PEPs and docs but I don't view that as critical for core devs who already
have the checkouts available and have the workflow memorized).

From my perspective, getting our commit workflow improved is the critical
first step before we worry about making it easier to receive patches. If we
can't keep up with an influx of patches that might occur from inline
editing then there is little point in having it; frustrating people that we
can't commit patches as fast as we receive them is not helpful.

Now in terms of how the heck we are ever going to improve our workflow,
that's tricky. As Nick as pointed out we are low on volunteer time. Take
the issue tracker as an example: Ezio Melotti does a large amount of work
and R. David Murray also helps, but that's mostly it (Martin von Löwis has
helped in the past but has been mostly absent as of late). We are not well
covered in the "hit by a bus" scenario.

I understand the viewpoint of not wanting to give up control of our process
to a third party, and I understand not wanting to use closed-source
software

Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-23 Thread Brett Cannon
On Sun Nov 23 2014 at 6:18:46 AM Nick Coghlan  wrote:

>
> On 23 Nov 2014 18:11, "Donald Stufft"  wrote:
> > > On Nov 23, 2014, at 2:35 AM, Nick Coghlan  wrote:
> > >
>
> > > In the absence of a proposal to change version control systems
> > > (again), the lack of Mercurial hosting on GitHub makes it rather a
> > > moot point. Given that we can barely muster up any enthusiasm for
> > > rehosting *without* changing version control systems (and the number
> > > of CI systems that integrate with hg.python.org repos other than the
> > > main CPython one is exactly zero), any proposal that involves doing
> > > even *more* work seems doubly doomed.
> > >
> >
> > I’d volunteer to do the work to get the PEPs, and possibly other
> repositories
> > onto Github if we so decided to do so. Don’t let the lack of volunteer
> stop
> > that because I will find the time to do it if need be.
>
> It's the other way around: someone would have to volunteer to make the
> case that switching version control systems will actually help in any way
> whatsoever with the core reviewer bandwidth problem.
>
> We do *not* have a shortage of people wanting to contribute. While ongoing
> outreach efforts are essential to promote increased diversity in the
> contributor base and to counter natural attrition, there is currently no
> major problem that needs solving on that front. CPython is high profile
> enough that folks are willing to do battle with the current complicated
> contribution process, so we're already one of the most active open source
> projects in the world, in *spite* of the problems with the existing
> workflow.
>

The *immediate* problem is making it easier to accept contributions from
people. The long-term, never-ending problem is making the whole process of
submitting a patch and getting it accepted as easy as possible for everyone
involved, contributor and committer alike. If the goal is to make it so we
can accept PRs for easier patch acceptances then that can be accomplished
on either Bitbucket or GitHub. But if we want to make it easier for people
to make contributions then GitHub is arguably better than Bitbucket,
whether it's through familiarity of GitHub for most people thanks to other
FLOSS projects or from the superior tooling around GitHub (both the
platform itself and the ecosystem that has sprung up around it).


>  This high level of activity also takes place in spite of the fact that
> direct corporate investment in paid contributions to the CPython runtime
> currently don't really reflect the key role that CPython holds in the
> enterprise Linux, OpenStack, data analysis, and education ecosystems (to
> name a few where I personally have some level of involvement either
> personally or through work).
>
> Where I believe we *do* have a problem is with failing to make the best
> possible use of core developer contribution time, as typos and other minor
> fixes to project (rather than product) documentation are managed through
> the same offline patch & upload process as the reference interpreter
> itself. (There are other issues as well, but they're out of scope for the
> current discussion, which is about the support repos, rather than CPython -
> the same problem exists there, but the solution is unlikely to be as
> straightforward).
>
> Patches getting held up in the review queue for weeks or months is a
> *huge* barrier to contribution, as it prevents the formation of the
> positive feedback cycle where having a contribution accepted feels good, so
> folks are more likely to want to contribute again.
>
> In that context, the useful features that a richer repo hosting
> application can potentially offer are single-click acceptance and merging
> of documentation changes that aren't directly linked to a specific CPython
> version, as well as the ability to make trivial fixes to that documentation
> (like fixing a typo) entirely online.
>
> Those features are readily accessible without changing the underlying
> version control system (whether self-hosted through Kallithea or externally
> hosted through BitBucket or RhodeCode). Thus the folks that want to change
> the version control system need to make the case that doing so will provide
> additional benefits that *can't* be obtained in a less disruptive way.
>

I guess my question is who and what is going to be disrupted if we go with
Guido's suggestion of switching to GitHub for code hosting? Contributors
won't be disrupted at all since most people are more familiar with GitHub
vs. Bitbucket (how many times have we all heard the fact someone has even
learned Mercurial just to contribute to Python?). Core developers might be
based on some learned workflow, but I'm willing to bet we all know git at
this point (and for those of us who still don't like it, myself included,
there are GUI apps to paper over it or hg-git for those that prefer a CLI).
Our infrastructure will need to be updated, but how much of it is that
hg-specific short of the command to checkou

Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-23 Thread Brett Cannon
On Sun Nov 23 2014 at 1:06:18 PM Ethan Furman  wrote:

> On 11/23/2014 08:55 AM, Brett Cannon wrote:
> >
> > Sure, but I would never compare our infrastructure needs to Red Hat. =) You
> > also have to be conservative in order to minimize downtime and impact for
> > cost reasons. As an open source project we don't have those kinds of
> > worries; we just have to worry about keeping everyone happy.
>
> Minimizing downtime and impact is important for us, too.  The Python job
> board has now been down for nine months --
> that's hardly good PR.
>

That has nothing to do with downtime and everything to do with volunteer
time. My point about "downtime" is that if I can't commit to the cpython
repo for a day it isn't going to cause me to freak out or cost anyone
thousands of dollars or more in revenue.

-Brett


>
> --
> ~Ethan~
>


Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-23 Thread Brett Cannon
On Sun Nov 23 2014 at 1:08:58 PM Ethan Furman  wrote:

> On 11/23/2014 08:55 AM, Brett Cannon wrote:
> >
> > Fourth, do any core developers feel strongly about not using GitHub?
>
> Does GitHub support hg?  If not, I am strongly opposed.
>

Depends on what you mean by "support". If you mean natively, then no. If
you mean "I want more of a hg CLI" then you can get that with
http://hg-git.github.io/ .

And can I just say this is all bringing back "wonderful" flashbacks of the
SourceForge to our own infrastructure move as well as the svn to hg move. =/


Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-23 Thread Brett Cannon
On Sun Nov 23 2014 at 11:56:49 AM Guido van Rossum  wrote:

> On Sat, Nov 22, 2014 at 10:49 PM, Nick Coghlan  wrote:
>
>> More generally, I'm very, very disappointed to see folks so willing to
>> abandon fellow community members for the sake of following the crowd.
>> Perhaps we should all just abandon Python and learn Ruby or JavaScript
>> because they're better at getting press in Silicon Valley?
>
>
> That's a really low blow, Nick.
>
> I think these are the facts:
>
> - Hg/Git are equivalent in functionality (at least to the extent that the
> difference can't be used to force a decision), and ditto for
> BitBucket/GitHub, with one crucial exception (see below)
>
> - We're currently using Hg for most projects under the PSF umbrella
> (however, there's https://github.com/python/pythondotorg)
>
> - Moving from Hg to Git is a fair amount of one-time work (converting
> repos) and is inconvenient to core devs who aren't already used to Git
> (learning a new workflow)
>
> - Most newer third-party projects are already on GitHub
>
> - GitHub is way more popular than BitBucket and slated for long-term
> success
>
> But here's the kicker for me: **A DVCS repo is a social network, so it
> matters in a functional way what everyone else is using.**
>
> So I give you that if you want a quick move into the modern world, while
> keeping the older generation of core devs happy (not counting myself :-),
> BitBucket has the lowest cost of entry. But I strongly believe that if we
> want to do the right thing for the long term, we should switch to GitHub. I
> promise you that once the pain of the switch is over you will feel much
> better about it. I am also convinced that we'll get more contributions this
> way.
>
> Note: I am not (yet) proposing we switch CPython itself. Switching it
> would be a lot of work, and it is specifically out of scope for this
> discussion.
>

If we want to test the complexity of moving something to GitHub then
probably the best repo to use is the peps one:

   - Very few people directly use that repo (you and me alone could
   probably manage it if we enforced all changes through a PR as I could then
   do approvals from work instead of having to wait until I was at home with
   an hg checkout available)
   - It's used on the website so it would require updating infrastructure
   - It isn't a lot of overhead to tell people who email the peps mailing
   list to "please send a pull request through GitHub" since it isn't tracked
   in the issue tracker anyway
   - There is a benefit of setting up some CI integration to know when a PR
   is actually incorrectly formatted

And if people want to test the impact of Bitbucket we could do it for
something like the HOWTOs as that too involves infrastructure but is not
used by a lot of people. In fact we can make it known we are piloting this
approach on Bitbucket and see what kind of contributions it triggers (ditto
for the peps since I'm sure some people will want to send in typo PRs and
such).

IOW I don't see why we can't pilot this between now and April for the
language summit and see what difference it all makes, so we can have an
informed discussion in Montreal with more than 4 full months of experience
under our belts. Then we can discuss Bitbucket vs. GitHub, docs vs.
everything moving vs. nothing, etc. That way this stops being all conjecture
and becomes more about seeing whether there is an actual impact.


Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-23 Thread Brett Cannon
On Sun Nov 23 2014 at 1:31:36 PM Brett Cannon  wrote:

> On Sun Nov 23 2014 at 11:56:49 AM Guido van Rossum 
> wrote:
>
>> On Sat, Nov 22, 2014 at 10:49 PM, Nick Coghlan 
>> wrote:
>>
>>> More generally, I'm very, very disappointed to see folks so willing to
>>> abandon fellow community members for the sake of following the crowd.
>>> Perhaps we should all just abandon Python and learn Ruby or JavaScript
>>> because they're better at getting press in Silicon Valley?
>>
>>
>> That's a really low blow, Nick.
>>
>> I think these are the facts:
>>
>> - Hg/Git are equivalent in functionality (at least to the extent that the
>> difference can't be used to force a decision), and ditto for
>> BitBucket/GitHub, with one crucial exception (see below)
>>
>> - We're currently using Hg for most projects under the PSF umbrella
>> (however, there's https://github.com/python/pythondotorg)
>>
>> - Moving from Hg to Git is a fair amount of one-time work (converting
>> repos) and is inconvenient to core devs who aren't already used to Git
>> (learning a new workflow)
>>
>> - Most newer third-party projects are already on GitHub
>>
>> - GitHub is way more popular than BitBucket and slated for long-term
>> success
>>
>> But here's the kicker for me: **A DVCS repo is a social network, so it
>> matters in a functional way what everyone else is using.**
>>
>> So I give you that if you want a quick move into the modern world, while
>> keeping the older generation of core devs happy (not counting myself :-),
>> BitBucket has the lowest cost of entry. But I strongly believe that if we
>> want to do the right thing for the long term, we should switch to GitHub. I
>> promise you that once the pain of the switch is over you will feel much
>> better about it. I am also convinced that we'll get more contributions this
>> way.
>>
>> Note: I am not (yet) proposing we switch CPython itself. Switching it
>> would be a lot of work, and it is specifically out of scope for this
>> discussion.
>>
>
> If we want to test the complexity of moving something to GitHub then
> probably the best repo to use is the peps one:
>
>- Very few people directly use that repo (you and me alone could
>probably manage it if we enforced all changes through a PR as I could then
>do approvals from work instead of having to wait until I was at home with
>an hg checkout available)
>- It's used on the website so it would require updating infrastructure
>- It isn't a lot of overhead to tell people who email the peps mailing
>list to "please send a pull request through GitHub" since it isn't tracked
>in the issue tracker anyway
>- There is a benefit of setting up some CI integration to know when a
>PR is actually incorrectly formatted
>
> And if people want to test the impact of Bitbucket we could do it for
> something like the HOWTOs as that too involves infrastructure but is not
> used by a lot of people. In fact we can make it known we are piloting this
> approach on Bitbucket and see what kind of contributions it triggers (ditto
> for the peps since I'm sure some people will want to send in typo PRs and
> such).
>

Actually, the tutorial might be the best way to measure ease of contribution
on Bitbucket, since we can also point people who use the tutorial at the
Bitbucket repo if they want to send a PR.


Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-23 Thread Brett Cannon
On Sun Nov 23 2014 at 3:04:05 PM Georg Brandl  wrote:

> On 11/23/2014 05:55 PM, Brett Cannon wrote:
>
> > I guess my question is who and what is going to be disrupted if we go
> with
> > Guido's suggestion of switching to GitHub for code hosting? Contributors
> won't
> > be disrupted at all since most people are more familiar with GitHub vs.
> > Bitbucket (how many times have we all heard the fact someone has even
> learned
> > Mercurial just to contribute to Python?). Core developers might be based
> on some
> > learned workflow, but I'm willing to bet we all know git at this point
> (and for
> > those of us who still don't like it, myself included, there are GUI apps
> to
> > paper over it or hg-git for those that prefer a CLI). Our infrastructure
> will
> > need to be updated, but how much of it is that hg-specific short of the
> command
> > to checkout out the repo? Obviously Bitbucket is much more minor by
> simply
> > updating just a URL, but changing `hg clone` to `git clone` isn't crazy
> either.
> > Georg, Antoine, or Benjamin can point out if I'm wrong on this, maybe
> Donald or
> > someone in the infrastructure committee.
>
> Well, since "most people" already know git this part is probably not a big
> deal,
> although we'd have to consider alienating core developers who are not
> git-savvy.
>
> > Probably the biggest thing I can think of that would need updating is
> our commit
> > hooks. Once again Georg, Antoine, or Benjamin could say how difficult it
> would
> > be to update those hooks.
>
> There are two categories of hooks we use: hooks that run before a push is
> accepted, and hooks that run afterwards.  The latter are not a concern,
> since
> anything that GH/BB doesn't support can be run as a web service on
> python.org
> infrastructure, which then gets POST requests from the hosting platforms.
> These
> are
>
> * tracker update
> * IRC notification
> * sending email to python-checkins
> * triggering buildbot
>
> The more problematic category are pre-push hooks.  We use them for checking
> and rejecting commits with
>
> * disallowed branches
> * non-conformant whitespace
> * wrong EOL style
> * multiple heads per named branch
>
> As far as I know, neither GH nor BB support such hooks, since they are
> highly
> project-specific.  However, they are only used in cpython and related
> repositories, so that doesn't concern migration of doc-only repos.
>
> Sure, you can let the CI run the checks, but that doesn't prohibit merging
> and is circumvented by direct pushes to the repository that don't go
> through
> the PR system.  (And please don't make me as a coredev open a PR for every
> change.)
>

I'm not even going to touch the idea of requiring code review for all
patches in the middle of this discussion. =)


> > From my perspective, swapping out Mercurial for git achieves exactly
> nothing
> > in terms of alleviating the review bottleneck (since the core
> developers
> > that strongly prefer the git UI will already be using an adapter),
> and is in
> > fact likely to make it worse by putting the greatest burden in
> adapting to
> > the change on the folks that are already under the greatest time
> pressure.
> >
> >
> > That's not entirely true. If you are pushing a PR shift in our patch
> acceptance
> > workflow then Bitbucket vs. GitHub isn't fundamentally any different in
> terms of
> > benefit, and I would honestly argue that GitHub's PR experience is
> better. IOW
> > either platform is of equal benefit.
>
> In my opinion, scattering repos over github, bitbucket and hg.python.org
> is
> even less friendly to contributors than a centralized place.  (We are
> already
> approaching this, with pydotorg and infrastructure repos on github.)  So
> I'm
> going to add some thoughts here about migrating the main CPython to
> git+hub.
>

I don't think we would ever split ourselves across three separate hosting
services. At most I see two -- one for docs, one for CPython -- but I would
honestly expect it to be only one long-term.


>
> We have to consider how well our branch workflow works with the PR
> workflow.  There's no gain in the ability to easily merge a PR to one
> branch
> via github when the subsequent merge of 3.x to default/master requires a
> local
> pull/push cycle, as well as the 2.x backport.
>
> As far as I know, you'd have to open a pull/merge request yourself and
> instantly
> merge it, except if there are conflicts between branches, in whic

Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-23 Thread Brett Cannon
On Sun Nov 23 2014 at 4:18:37 PM Georg Brandl  wrote:

> On 11/23/2014 09:42 PM, Brett Cannon wrote:
>

[SNIP]

> > > > And I'm still in support no matter what of breaking out the HOWTOs and the
> > > > tutorial into their own repos for easier updating (having to update the
> > > > Python porting HOWTO in three branches is a pain when it should be
> > > > consistent across Python releases).
> > >
> > > I see no problem with that, provided there's a cronjob that syncs the
> > > version in Doc/ to the external version reasonably often.
> >
> > Would that really be necessary? At least for the HOWTOs how often are they
> > edited simultaneously as some code in CPython? The Clinic HOWTO is probably
> > the only one that might get updated simultaneously. I'd also be curious to
> > know how often the tutorial is updated simultaneously as well.
>
> I'd like the HOWTOs to stay part of Doc/, so changes in the external repo have
> to be merged in there somehow, and not only at release time.
>

Right, I'm trying to understand *why* you want the HOWTOs to stay in Doc/.
I dread having to update the porting HOWTO because it requires updating
2.7, 3.4, and default. And if the process is automated to pull from an
external repo then what is the benefit of syncing the files and duplicating
them across 4 repos?


Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-24 Thread Brett Cannon
On Mon Nov 24 2014 at 2:25:30 AM Nick Coghlan  wrote:

> On 24 November 2014 at 02:55, Brett Cannon  wrote:
> > On Sun Nov 23 2014 at 6:18:46 AM Nick Coghlan 
> wrote:
> >> Those features are readily accessible without changing the underlying
> >> version control system (whether self-hosted through Kallithea or
> externally
> >> hosted through BitBucket or RhodeCode). Thus the folks that want to
> change
> >> the version control system need to make the case that doing so will
> provide
> >> additional benefits that *can't* be obtained in a less disruptive way.
> >
> > I guess my question is who and what is going to be disrupted if we go
> with
> > Guido's suggestion of switching to GitHub for code hosting? Contributors
> > won't be disrupted at all since most people are more familiar with GitHub
> > vs. Bitbucket (how many times have we all heard the fact someone has even
> > learned Mercurial just to contribute to Python?). Core developers might
> be
> > based on some learned workflow, but I'm willing to bet we all know git at
> > this point (and for those of us who still don't like it, myself included,
> > there are GUI apps to paper over it or hg-git for those that prefer a
> CLI).
> > Our infrastructure will need to be updated, but how much of it is that
> > hg-specific short of the command to checkout out the repo? Obviously
> > Bitbucket is much more minor by simply updating just a URL, but changing
> `hg
> > clone` to `git clone` isn't crazy either. Georg, Antoine, or Benjamin can
> > point out if I'm wrong on this, maybe Donald or someone in the
> > infrastructure committee.
>
> Are you volunteering to write a competing PEP for a migration to git and
> GitHub?
>

Been there, done that, got the PEP number. I'm just trying to speak from
the perspective of the person who drove us off of svn and on to hg (as well
as drove us off of SourceForge to our own workflow stack). I personally
just want a better workflow. As I said at the beginning, I read your PEPs
and talked to you about them at PyCon and I want something like that to
happen; push button patch acceptance along with CI of patches would go a
long way to making accepting patches easier. But as others have pointed
out, we just don't have the volunteer time to make them happen ATM, so I'm
willing to entertain moving to GitHub or Bitbucket or whatever to improve
our situation.


>
> I won't be updating PEP 474 to recommend moving to either, as I don't
> think that would be a good outcome for the Python ecosystem as a
> whole. It massively undercuts any possible confidence anyone else
> might have in Mercurial, BitBucket, Rhodecode, Kallithea & Allura (all
> Python based version control, or version control hosting, systems). If
> we as the Python core development team don't think any of those are
> good enough to meet the modest version control needs of our support
> repos, why on earth would anyone else choose them?
>
> In reality, I think most of these services are pretty interchangeable
> - GitHub's just been the most effective at the venture capital powered
> mindshare grab business model (note how many of the arguments here
> stem from the fact folks like *other* things that only interoperate
> with GitHub, and no other repository hosting providers - that's the
> core of the A18z funded approach to breaking the "D" in DVCS and
> ensuring that GitHub's investors are in a position to clip the ticket
> when GitHub eventually turns around and takes advantage of its
> dominant market position to increase profit margins).
>
> That's why I consider it legitimate to treat supporting fellow Python
> community members as the determining factor - a number of the
> available options meet the "good enough" bar from a technical
> perspective, so it's reasonable to take other commercial and community
> factors into account when making a final decision.
>
> > Probably the biggest thing I can think of that would need updating is our
> > commit hooks. Once again Georg, Antoine, or Benjamin could say how
> difficult
> > it would be to update those hooks.
>
> If CPython eventually followed suit in migrating to git (as seems
> inevitable if all the other repos were to switch), then every buildbot
> will also need to be updated to have git installed (and Mercurial
> removed).
>
> >> From my perspective, swapping out Mercurial for git achieves exactly
> >> nothing in terms of alleviating the review bottleneck (since the core
> >> developers that strongly prefer the git UI will already be using an
> >> adapter), and is in fa

Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-25 Thread Brett Cannon
On Tue Nov 25 2014 at 1:17:49 AM Nick Coghlan  wrote:

> On 25 November 2014 at 13:18, Donald Stufft  wrote:
> >
> > There’s also the social aspects of it as well which is a big concern too
> IMO. If you want to attract new contributors, not just keep the ones you
> already have sometimes that means going to where the new contributors are
> instead of telling them that they need to come to you.
>
> Again, the bottleneck at the moment is *reviewing* contributions, not
> getting more of them. The two aspects are not unrelated, but my key
> concern at this point is to make the patch review and acceptance
> process easier, moreso than to increase the rate of incoming patches.
>
> My short term proposal to consider BitBucket as an option for support
> repo hosting purposes was mostly driven by my delays in getting the
> end-to-end signing PEPs for PyPI updated in a timely fashion - that
> would have been much easier if the authors had been able to submit
> pull requests, and I just reviewed and accepted them.
>

And then people thought, "ooh, if we are going to open that can of worms we
might as well get the better network effect of GitHub" along with Guido
going "git >= hg".


>
> The subsequent discussion has made me realise that dissatisfaction
> with the current state of the infrastructure amongst core developers
> is higher than I previously realised, so I've re-evaluated my own
> priorities, and will be spending more time on both PEP 474
> (forge.python.org) and PEP 462 (the far more complex proposal to
> consider introducing OpenStack style merge gating for CPython).
>

Yay!


>
> At present, it looks like significant workflow improvements for the
> main CPython repos will require custom tooling - there's very little
> out there that will adequately support a long term maintenance branch,
> a short term maintenance branch, additional security fix only
> branches, and a separate main line of development.
>

Yes, we are unfortunately special.


>
> Having our own Kallithea installation would provide additional
> implementation options on that front, so I'll be keeping that in mind
> as I work to get the proof-of-concept forge instance online.
>

I think this is a reasonable summary of what came up. Short of Donald and
maybe Guido really liking the GitHub idea because of its reach, most of us
just want better tooling, and we all have various compromises we are willing
to make to gain that tooling. I suspect that if we add Bitbucket and GitHub
login support to the issue tracker, that would go a fair distance toward
countering GitHub's pull in terms of reach (and if we make it so people can
simply paste their fork's URL into the issue tracker and we automatically
grab a new patch for review, that would go even farther).


Re: [Python-Dev] advice needed: best approach to enabling "metamodules"?

2014-11-30 Thread Brett Cannon
 On Sat, Nov 29, 2014, 21:55 Guido van Rossum  wrote:

All the use cases seem to be about adding some kind of getattr hook to
modules. They all seem to involve modifying the CPython C code anyway. So
why not tackle that problem head-on and modify module_getattro() to look
for a global named __getattr__ and if it exists, call that instead of
raising AttributeError?

 Not sure if anyone thought of it. :) Seems like a reasonable solution to
me. Be curious to know what the benchmark suite said the impact was.
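
To make that concrete, here is a rough sketch of what a module author could
write, assuming the hook mirrors class-level __getattr__ semantics (the module
and attribute names below are made up purely for illustration):

# mymod.py -- hypothetical; assumes the proposed module-level hook exists.
import warnings

def _new_api():
    return "shiny"

def __getattr__(name):
    # Only invoked when a normal lookup in the module's globals fails,
    # just like class-level __getattr__.
    if name == "old_api":
        warnings.warn("old_api is deprecated; use _new_api instead",
                      DeprecationWarning, stacklevel=2)
        return _new_api
    raise AttributeError("module %r has no attribute %r" % (__name__, name))

That would cover the cleaner-deprecations use case without slowing down
successful attribute lookups, since the hook only fires on failures.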

-brett


On Sat, Nov 29, 2014 at 11:37 AM, Nathaniel Smith  wrote:

On Sat, Nov 29, 2014 at 4:21 AM, Guido van Rossum  wrote:
> Are these really all our options? All of them sound like hacks, none of
them
> sound like anything the language (or even the CPython implementation)
should
> sanction. Have I missed the discussion where the use cases and constraints
> were analyzed and all other approaches were rejected? (I might have some
> half-baked ideas, but I feel I should read up on the past discussion
first,
> and they are probably more fit for python-ideas than for python-dev. Plus
> I'm just writing this email because I'm procrastinating on the type
hinting
> PEP. :-)

The previous discussions I was referring to are here:
  http://thread.gmane.org/gmane.comp.python.ideas/29487/focus=29555
  http://thread.gmane.org/gmane.comp.python.ideas/29788

There might well be other options; these are just the best ones I
could think of :-). The constraints are pretty tight, though:
- The "new module" object (whatever it is) should have a __dict__ that
aliases the original module globals(). I can elaborate on this if my
original email wasn't enough, but hopefully it's obvious that making
two copies of the same namespace and then trying to keep them in sync
at the very least smells bad :-).
- The "new module" object has to be a subtype of ModuleType, b/c there
are lots of places that do isinstance(x, ModuleType) checks (notably
-- but not only -- reload()). Since a major goal here is to make it
possible to do cleaner deprecations, it would be really unfortunate if
switching an existing package to use the metamodule support itself
broke things :-).
- Lookups in the normal case should have no additional performance
overhead, because module lookups are extremely extremely common. (So
this rules out dict proxies and tricks like that -- we really need
'new_module.__dict__ is globals()' to be true.)

AFAICT there are three logically possible strategies for satisfying
that first constraint:
(a) convert the original module object into the type we want, in-place
(b) create a new module object that acts like the original module object
(c) somehow arrange for our special type to be used from the start

My options 1 and 2 are means of accomplishing (a), and my options 3
and 4 are means of accomplishing (b) while working around the
behavioural quirks of module objects (as required by the second
constraint).

The python-ideas thread did also consider several methods of
implementing strategy (c), but they're messy enough that I left them
out here. The problem is that somehow we have to execute code to
create the new subtype *before* we have an entry in sys.modules for
the package that contains the code for the subtype. So one option
would be to add a new rule, that if a file pkgname/__new__.py exists,
then this is executed first and is required to set up
sys.modules["pkgname"] before we exec pkgname/__init__.py. So
pkgname/__new__.py might look like:

import sys
from pkgname._metamodule import MyModuleSubtype
sys.modules[__name__] = MyModuleSubtype(__name__, docstring)

This runs into a lot of problems though. To start with, the 'from
pkgname._metamodule ...' line is an infinite loop, b/c this is the
code used to create sys.modules["pkgname"]. It's not clear where the
globals dict for executing __new__.py comes from (who defines
__name__? Currently that's done by ModuleType.__init__). It only works
for packages, not modules. The need to provide the docstring here,
before __init__.py is even read, is weird. It adds extra stat() calls
to every package lookup. And, the biggest showstopper IMHO: AFAICT
it's impossible to write a polyfill to support this code on old python
versions, so it's useless to any package which needs to keep
compatibility with 2.7 (or even 3.4). Sure, you can backport the whole
import system like importlib2, but telling everyone that they need to
replace every 'import numpy' with 'import importlib2; import numpy' is
a total non-starter.

So, yeah, those 4 options are really the only plausible ones I know of.

Option 1 and option 3 are pretty nice at the language level! Most
Python objects allow assignment to __class__ and __dict__, and both
PyPy and Jython at least do support __class__ assignment. Really the
only downside with Option 1 is that actually implementing it requires
attention from someone with deep knowledge of typeobject.c.

-n

--
Nathaniel J. Smith
Postdoctoral researcher - Informatics

Re: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github

2014-11-30 Thread Brett Cannon
On Sat Nov 29 2014 at 7:16:34 PM Alex Gaynor  wrote:

> Donald Stufft  writes:
>
> >
> > [words words words]
> >
>
> I strongly support this PEP. I'd like to share two pieces of information.
> Both
> of these are personal anecdotes:
>
> For the past several years, I've been a contributor on two major projects
> using
> mercurial, CPython and PyPy. PyPy has a strong culture of in-repo
> branching,
> basically all contributors regularly make branches in the main repo for
> their
> work, and we're very free in giving people commit rights, so almost
> everyone
> working on PyPy in any way has this level of access. This workflow works
> ok. I
> don't love it as much as git, but it's fine, it's not an impediment to my
> work.
>
> By contrast, CPython does not have this type of workflow, there are almost
> no
> in-tree branches besides the 2.7, 3.4, etc. ones. Despite being a regular
> hg
> user for years, I have no idea how to create a local-only branch, or a
> branch
> which is pushed to a remote (to use the git term). I also don't generally
> commit my own work to CPython, even though I have push privledges,
> because I
> prefer to *always* get code review on my work. As a result, I use a git
> mirror
> of CPython to do all my work, and generate patches from that.
>
> The conclusion I draw from this is that hg's workflow is probably fine if
> you're a committer on the project, or don't ever need to maintain multiple
> patches concurrently (and thus can just leave everything uncommitted in the
> repo). However, the hg workflow seems extremely deficient for non-committer
> contributors.
>

One way to come close to that using hg is to have your own clone that you
never push to hg.python.org/cpython (e.g. cloning the Bitbucket clone of
cpython, or hosting a personal clone on hg.python.org). You can then specify
the repo and branch on the issue tracker to generate your patch:
https://docs.python.org/devguide/triaging.html#mercurial-repository . After
that it's just like any other patch workflow for core devs. It's not quite
as nice as using named branches, where you can just do a final hg
merge/commit to get your changes committed, but if you're not going to
commit your branches then you might as well get the automatic patch
generation perk in the issue tracker rather than using git (unless there is
some other git feature you use that you can't get in hg).


>
> The seconds experience I have is that of Django's migration to git and
> github.
> For a long time we were on SVN, and we were very resistant to moving to
> DVCS in
> general, and github in particular. Multiple times I said that I didn't see
> how
> exporting a patch and uploading it to trac was more difficult than sending
> a
> pull request. That was very wrong on my part.
>
> My primary observation is not about new contributors though, it's actually
> about the behavior of core developers. Before we were on github, it was
> fairly
> rare for core developers to ask for reviews for anything besides *gigantic*
> patches, we'd mostly just commit stuff to trunk. Since the switch to
> github,
> I've seen that core developers are *far* more likely to ask for reviews of
> their work before merging.
>

Why specifically? Did you have a web UI for reviewing patches previously?
Do you have CI set up for patches now and didn't before? What features did
you specifically gain from the switch to GitHub that you didn't have
before? IOW was it the "magic" of GitHub or some technical solution that
you got as part of the GitHub package and thus could theoretically be
replicated on python.org?

-Brett


Re: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github

2014-11-30 Thread Brett Cannon
On Sun Nov 30 2014 at 10:55:26 AM Ian Cordasco 
wrote:

> On Sun, Nov 30, 2014 at 7:01 AM, Antoine Pitrou 
> wrote:
> > On Sun, 30 Nov 2014 16:23:08 +1100
> > Chris Angelico  wrote:
> >>
> >> Yes, GitHub is proprietary. But all of your actual code is stored in
> >> git, which is free, and it's easy to push that to a new host somewhere
> >> else, or create your own host. This proposal is for repositories that
> >> don't need much in the way of issue trackers etc, so shifting away
> >> from GitHub shouldn't demand anything beyond moving the repos
> >> themselves.
> >
> > I hope we're not proposing to move the issue trackers to github,
> > otherwise I'm -1 on this PEP.
> >
> > Regards
> >
> > Antoine.
>
> So I usually choose not to weigh in on discussions like these but
> there seems to be a lot of misdirection in some of these arguments.
>
> To start, I'm generally neutral about this proposal or Nick's proposal
> that spurred this one. I've found the most frustrating part of
> contributing to anything involving CPython to be the lack of reviewer
> time. I have had very small (2-line) patches take months (close to a
> year in reality) to get through in spite of periodically pinging the
> appropriate people. Moving to git/GitHub will not alleviate this at
> all.
>
> To be clear, the main reasoning behind Nick's was being able to submit
> changes without ever having to have a local copy of the repository in
> question on your machine. Having a complete web workflow for editing
> and contributing makes the barrier to entry far lower than switching
> VCS or anything else. BitBucket (apparently, although I've never used
> this) and GitHub both have this capability and *both* are
> free-as-in-beer systems.
>
> No one as I understand it is proposing that we use the per-distro
> proprietary interface to these websites.
>
> All data can be removed from GitHub using its API and can generally
> be converted to another platform. The same goes for BitBucket although
> it's arguably easier to retrieve issue data from BitBucket than
> GitHub. That said, *the issue tracker is not covered by these
> proposals* so this is a moot point. Drop it already.
>
> If we're seriously considering moving to git as a DVCS, we should
> consider the real free-as-in-freedom alternatives that come very close
> to GitHub and can be improved by us (even though they're not written
> in Python). One of those is GitLab. We can self-host a GitLab instance
> easily or we can rely on gitlab.com. GitLab aims to provide a very
> similar user experience to GitHub and it's slowly approaching feature
> parity and experience parity. GitLab is also what a lot of people
> chose to switch to after the incident Steven mentioned, which I don't
> think is something we should dismiss or ignore.
>
> We should refocus the discussion with the following in mind:
>
> - Migrating "data" from GitHub is easy. There are free-as-in-freedom
> tools to do it and the only cost is the time it would take to monitor
> the process
>
> - GitHub has a toxic company culture that we should be aware of before
> moving to it. They have a couple blog posts about attempting to change
> it but employees became eerily silent after the incident and have
> remained so from what I've personally seen.
>
> - GitHub may be popular but there are popular FOSS solutions that
> exist that we can also self-host at something like forge.python.org
>
> - bugs.python.org is not covered by any of these proposals
>
> - The main benefit this proposal (and the proposal to move to
> BitBucket) are seeking to achieve is an online editing experience
> allowing for *anyone with a browser and an account* to contribute.
> This to me is the only reason I would be +1 for either of these
> proposals (if we can reach consensus).
>

But that's not just it. As you pointed out, Ian, getting patch submissions
committed faster would be a huge improvement over what we have today.
GitHub/Bitbucket/whatever could help with this by giving core devs basic CI
to know that a patch is sound to some extent, as well as push-button commits
of patches.

For me personally, if I knew a simple patch integrated cleanly and passed
on at least one buildbot -- when it wasn't a platform-specific fix -- then
I could easily push a "Commit" button and be done with it (although this
assumes single-branch committing; doing this across branches makes all of
this difficult unless we finally resolve our Misc/NEWS conflict issues so
that in some instances it can be automated). Instead I have to wait until I
have a clone I can push from, download the patch, apply it, run the unit
tests myself, do the commit, and then repeat a subset of that for whatever
branches make sense. It's a lot of work, some of which could be automated.

Re: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github

2014-11-30 Thread Brett Cannon
On Sun Nov 30 2014 at 12:00:20 PM Donald Stufft  wrote:

>
> On Nov 30, 2014, at 11:44 AM, Brett Cannon  wrote:
>
>
>
> On Sun Nov 30 2014 at 10:55:26 AM Ian Cordasco 
> wrote:
>
>> On Sun, Nov 30, 2014 at 7:01 AM, Antoine Pitrou 
>> wrote:
>> > On Sun, 30 Nov 2014 16:23:08 +1100
>> > Chris Angelico  wrote:
>> >>
>> >> Yes, GitHub is proprietary. But all of your actual code is stored in
>> >> git, which is free, and it's easy to push that to a new host somewhere
>> >> else, or create your own host. This proposal is for repositories that
>> >> don't need much in the way of issue trackers etc, so shifting away
>> >> from GitHub shouldn't demand anything beyond moving the repos
>> >> themselves.
>> >
>> > I hope we're not proposing to move the issue trackers to github,
>> > otherwise I'm -1 on this PEP.
>> >
>> > Regards
>> >
>> > Antoine.
>>
>> So I usually choose not to weigh in on discussions like these but
>> there seems to be a lot of misdirection in some of these arguments.
>>
>> To start, I'm generally neutral about this proposal or Nick's proposal
>> that spurred this one. I've found the most frustrating part of
>> contributing to anything involving CPython to be the lack of reviewer
>> time. I have had very small (2-line) patches take months (close to a
>> year in reality) to get through in spite of periodically pinging the
>> appropriate people. Moving to git/GitHub will not alleviate this at
>> all.
>>
>> To be clear, the main reasoning behind Nick's was being able to submit
>> changes without ever having to have a local copy of the repository in
>> question on your machine. Having a complete web workflow for editing
>> and contributing makes the barrier to entry far lower than switching
>> VCS or anything else. BitBucket (apparently, although I've never used
>> this) and GitHub both have this capability and *both* are
>> free-as-in-beer systems.
>>
>> No one as I understand it is proposing that we use the per-distro
>> proprietary interface to these websites.
>>
>> All data can be removed from GitHub using its API and can generally
>> be converted to another platform. The same goes for BitBucket although
>> it's arguably easier to retrieve issue data from BitBucket than
>> GitHub. That said, *the issue tracker is not covered by these
>> proposals* so this is a moot point. Drop it already.
>>
>> If we're seriously considering moving to git as a DVCS, we should
>> consider the real free-as-in-freedom alternatives that come very close
>> to GitHub and can be improved by us (even though they're not written
>> in Python). One of those is GitLab. We can self-host a GitLab instance
>> easily or we can rely on gitlab.com. GitLab aims to provide a very
>> similar user experience to GitHub and it's slowly approaching feature
>> parity and experience parity. GitLab is also what a lot of people
>> chose to switch to after the incident Steven mentioned, which I don't
>> think is something we should dismiss or ignore.
>>
>> We should refocus the discussion with the following in mind:
>>
>> - Migrating "data" from GitHub is easy. There are free-as-in-freedom
>> tools to do it and the only cost is the time it would take to monitor
>> the process
>>
>> - GitHub has a toxic company culture that we should be aware of before
>> moving to it. They have a couple blog posts about attempting to change
>> it but employees became eerily silent after the incident and have
>> remained so from what I've personally seen.
>>
>> - GitHub may be popular but there are popular FOSS solutions that
>> exist that we can also self-host at something like forge.python.org
>>
>> - bugs.python.org is not covered by any of these proposals
>>
>> - The main benefit this proposal (and the proposal to move to
>> BitBucket) are seeking to achieve is an online editing experience
>> allowing for *anyone with a browser and an account* to contribute.
>> This to me is the only reason I would be +1 for either of these
>> proposals (if we can reach consensus).
>>
>
> But that's not just it. As you pointed out, Ian, getting patch submissions
> committed faster would be a huge improvement over what we have today.
> GitHub/Bitbucket/whatever could help with this by giving core devs basic CI
> to know that I patch is sound to some extent as well as push button commits
> of patch

Re: [Python-Dev] advice needed: best approach to enabling "metamodules"?

2014-11-30 Thread Brett Cannon
On Sun Nov 30 2014 at 2:16:18 PM Guido van Rossum  wrote:

> On Sun, Nov 30, 2014 at 6:15 AM, Brett Cannon  wrote:
>>
>>  On Sat, Nov 29, 2014, 21:55 Guido van Rossum  wrote:
>>
>> All the use cases seem to be about adding some kind of getattr hook to
>> modules. They all seem to involve modifying the CPython C code anyway. So
>> why not tackle that problem head-on and modify module_getattro() to look
>> for a global named __getattr__ and if it exists, call that instead of
>> raising AttributeError?
>>
>>  Not sure if anyone thought of it. :) Seems like a reasonable solution
>> to me. Be curious to know what the benchmark suite said the impact was.
>>
> Why would there be any impact? The __getattr__ hook would be similar to
> the one on classes -- it's only invoked at the point where otherwise
> AttributeError would be raised.
>

You're right. My brain was thinking __getattribute__ semantics for some
reason.



Re: [Python-Dev] advice needed: best approach to enabling "metamodules"?

2014-11-30 Thread Brett Cannon
On Sun Nov 30 2014 at 2:28:31 PM Ethan Furman  wrote:

> On 11/30/2014 11:15 AM, Guido van Rossum wrote:
> > On Sun, Nov 30, 2014 at 6:15 AM, Brett Cannon wrote:
> >> On Sat, Nov 29, 2014, 21:55 Guido van Rossum wrote:
> >>>
> >>> All the use cases seem to be about adding some kind of getattr hook
> >>> to modules. They all seem to involve modifying the CPython C code
> >>> anyway. So why not tackle that problem head-on and modify
> module_getattro()
> >>> to look for a global named __getattr__ and if it exists, call that
> instead
> >>> of raising AttributeError?
> >>
> >> Not sure if anyone thought of it. :) Seems like a reasonable solution
> to me.
> >> Be curious to know what the benchmark suite said the impact was.
> >
> > Why would there be any impact? The __getattr__ hook would be similar to
> the
> > one on classes -- it's only invoked at the point where otherwise
> AttributeError
> > would be raised.
>
> I think the bigger question is how do we support it back on 2.7?
>

You don't; you just can't shoehorn everything back to 2.7.

And just to make sure everyone participating in this discussion is up on
the latest import stuff, Python 3.4 does have Loader.create_module()
<https://docs.python.org/3/library/importlib.html#importlib.abc.Loader.create_module>,
which lets you control what object is used for a module in the import
machinery (this happens prior to loading, though, so you can't specify it
in the module itself, only at the loader level). This is how I was able to
implement lazy loading for 3.5
<https://docs.python.org/3.5/library/importlib.html#importlib.util.LazyLoader>.
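
For anyone who hasn't looked at it yet, wiring that up looks roughly like
this (a sketch following the LazyLoader pattern; "json" is just an example
module name):

import importlib.util
import sys

def lazy_import(name):
    # Find the module without executing it, then wrap its loader so that
    # execution is deferred until the first attribute access.
    spec = importlib.util.find_spec(name)
    spec.loader = importlib.util.LazyLoader(spec.loader)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)  # sets up the lazy proxy; no real exec yet
    return module

json = lazy_import("json")  # nothing heavy has run at this point
json.dumps({})              # first attribute access triggers the real load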


Re: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github

2014-11-30 Thread Brett Cannon
On Sun Nov 30 2014 at 2:33:35 PM Donald Stufft  wrote:

>
> On Nov 30, 2014, at 2:19 PM, Brett Cannon  wrote:
>
> All very true, but if we can't improve both sides then we are simply going
> to end up with even more patches that we take a while to get around to. I
> want to end up with a solution that advances the situation for *both*
> committers and non-committers and I feel like that is being lost in the
> discussion somewhat. As the person who pushed for a migration to DVCS for
> non-committers I totally support improving the workflow for non-committers,
> but not at the cost of ignoring the latter half of the contribution
> workflow of committers which is a chronic problem.
>
> As the PEP points out, the devguide, devinabox, and the PEPs have such a
> shallow development process that hosting them on Bitbucket wouldn't be a
> big thing. But if we don't view this as a long-term step towards moving
> cpython development somehow we are bifurcating our code contributors
> between git and hg which will be annoying. Now it could be argued that it
> doesn't matter for the peps and devguide since they are purely text and can
> be done easily through a web UI and a simple CI in Travis can be set up to
> make sure that the docs compile cleanly. But moving devinabox where there
> is going to be a code checkout in order to execute code for testing, etc.
> will be an issue.
>
> So I guess my view is +0 for doc-only repos on GitHub as long as we make
> it clear we are doing it with the expectation that people will do
> everything through the web UI and never have to know git. But I can't
> advocate moving code over without moving ALL repos over to git -- hosting
> location doesn't matter to me -- to prevent having to know both DVCSs in
> order to do coding work related to Python; the cpython repo shouldn't
> become this vaunted repo that is special and so it's in hg long-term but
> everything else is on git.
>
>
> So a goal of mine here is to sort of use these as a bit of a test bed.
> Moving CPython itself is a big and drastic change with a lot of
> implications, but moving the “support” repositories is not nearly as much,
> especially with a read only mirror on hg.python.org which would allow
> things like the PEP rendering on www.python.org to stay the same if we
> wanted to. My hope was that we’d try this out, see how it works out, and if
> it seems to be a good thing, then at a later time we can either look at
> moving CPython itself or decide if it makes sense to do something
> different. Maybe this should be spelled out in the PEP?
>
> I’ve seen a few people say they were -1 because they didn’t want to split
> between hg on the CPython side and git on the supporting repos side. I’m
> not sure you can really get away from that because we’re *already* in that
> situation, things like the docs building script is a Git repo on Github,
> the python infrastructure itself is a git repo on Github, the new
> python.org website is a git repo on Github, the future PyPI is a git repo
> on GitHub.
>

That doesn't bother me, as that is support infrastructure around CPython and
in no way directly tied to CPython releases. But devinabox, for instance, is
specifically for helping people contribute to CPython, so asking people to
check devinabox out with git but then work in hg for some of the repos it
checks out and git for others is just asking for trouble (e.g. devinabox
checks out the peps, devguide, and cpython repos).


>
>
> IOW I’m not sure what the best way forward is. I think moving to these
> tools for *all* repos is likely to be in the best interests of making
> things better for both sides of that coin however I didn’t want to go
> wholesale and try and make everything switch at all at once. If you think
> it makes sense to drop devinabox and make the dividing line between Code
> and not code (although I’d argue that line is already crossed with other
> code things already being on github) that’s fine with me. Or I can expand
> the scope if people think that makes more sense in the PEP too.
>

Depends what other people think, but for me it's "we are going to move to
git long-term and we are starting an experiment with docs on GitHub to see
if that impacts contributions and committer maintenance at least for docs,
maybe for code eventually".


Re: [Python-Dev] advice needed: best approach to enabling "metamodules"?

2014-11-30 Thread Brett Cannon
On Sun Nov 30 2014 at 3:55:39 PM Guido van Rossum  wrote:

> On Sun, Nov 30, 2014 at 11:29 AM, Nathaniel Smith  wrote:
>
>> On Sun, Nov 30, 2014 at 2:54 AM, Guido van Rossum 
>> wrote:
>> > All the use cases seem to be about adding some kind of getattr hook to
>> > modules. They all seem to involve modifying the CPython C code anyway.
>> So
>> > why not tackle that problem head-on and modify module_getattro() to
>> look for
>> > a global named __getattr__ and if it exists, call that instead of
>> raising
>> > AttributeError?
>>
>> You need to allow overriding __dir__ as well for tab-completion, and
>> some people wanted to use the properties API instead of raw
>> __getattr__, etc. Maybe someone will want __getattribute__ semantics,
>> I dunno.
>
>
> Hm... I agree about __dir__ but the other things feel too speculative.
>
>
>> So since we're *so close* to being able to just use the
>> subclassing machinery, it seemed cleaner to try and get that working
>> instead of reimplementing bits of it piecewise.
>>
>
> That would really be option 1, right? It's the one that looks cleanest
> from the user's POV (or at least from the POV of a developer who wants to
> build a framework using this feature -- for a simple one-off use case,
> __getattr__ sounds pretty attractive). I think that if we really want
> option 1, the issue of PyModuleType not being a heap type can be dealt with.
>
>
>> That said, __getattr__ + __dir__ would be enough for my immediate use
>> cases.
>
>
>  Perhaps it would be a good exercise to try and write the "lazy submodule
> import"(*) use case three ways: (a) using only CPython 3.4; (b) using
> __class__ assignment; (c) using customizable __getattr__ and __dir__. I
> think we can learn a lot about the alternatives from this exercise. I
> presume there's already a version of (a) floating around, but if it's been
> used in practice at all, it's probably too gnarly to serve as a useful
> comparison (though its essence may be extracted to serve as such).
>
> FWIW I believe all proposals here have a big limitation: the module
> *itself* cannot benefit much from all these shenanigans, because references
> to globals from within the module's own code are just dictionary accesses,
> and we don't want to change that.
>
> (*) I originally wrote "lazy import", but I realized that messing with the
> module class object probably isn't the best way to implement that -- it
> requires a proxy for the module that's managed by an import hook. But if
> you think it's possible, feel free to use this example, as "lazy import"
> seems a pretty useful thing to have in many situations. (At least that's
> how I would do it. And I would probably add some atrocious hack to patch up
> the importing module's globals once the module is actually loaded, to
> reduce the cost of using the proxy over the lifetime of the process.)
>

Start at
https://hg.python.org/cpython/file/64bb01bce12c/Lib/importlib/util.py#l207
and read down the rest of the file. It really only requires changing
__class__ to drop the proxy, and that's done immediately after the lazy
import. The approach also kicks in *after* the finder has run, so you at
least won't get an ImportError later for a missing file.
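
If you don't want to dig through the file, the core of the trick is tiny; a
stripped-down illustration (not the actual importlib code, which is more
careful about state) looks something like:

import types

class _LazyProxyModule(types.ModuleType):
    # Simplified sketch only; assumes __spec__/loader were set up when the
    # module object was created and that the module has not run yet.
    def __getattribute__(self, attr):
        # First attribute access: permanently stop being a proxy by
        # swapping back to a plain module type...
        self.__class__ = types.ModuleType
        # ...execute the module's code for real...
        self.__spec__.loader.exec_module(self)
        # ...then answer the original lookup on the now-loaded module.
        return getattr(self, attr)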


Re: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github

2014-12-01 Thread Brett Cannon
On Sun Nov 30 2014 at 8:25:25 PM Guido van Rossum  wrote:

> Can we please stop the hg-vs-git discussion? We've established earlier
> that the capabilities of the DVCS itself (hg or git) are not a
> differentiator, and further he-said-she-said isn't going to change
> anybody's opinion.
>

+1 from me as well. I view this as a discussion of platforms and not DVCSs.


>
> What's left is preferences of core developers, possibly capabilities of
> the popular websites (though BitBucket vs. GitHub seems to be a wash as
> well), and preferences of contributors who aren't core developers (using
> popularity as a proxy). It seems the preferences of the core developers are
> mixed, while the preferences of non-core contributors are pretty clear, so
> we have a problem weighing these two appropriately.
>
> Also, let's not get distracted by the needs of the CPython repo, issue
> tracker, and code review tool. Arguments about core developers vs.
> contributors for CPython shouldn't affect the current discussion.
>
> Next, two of the three repos mentioned in Donald's PEP 481 are owned by
> Brett Cannon, according to the Contact column listed on hg.python.org. I
> propose to let Brett choose whether to keep these on hg.python.org, move
> to BitBucket, or move to GitHub. @Brett, what say you? (Apart from "I'm
> tired of the whole thread." :-)
>

You do one or two nice things for python-dev and you end up being saddled
with them for life. ;)

Sure, I can handle the devguide and devinabox decisions since someone has
to, and it isn't going to be any more "fun" for someone else than it is for me.


>
> The third one is the peps repo, which has python-dev@python.org as
> Contact. It turns out that Nick is by far the largest contributor (he
> committed 215 of the most recent 1000 changes) so I'll let him choose.
>

"Perk" of all those packaging PEPs.


>
> Finally, I'd like to get a few more volunteers for the PEP editors list,
> preferably non-core devs: the core devs are already spread too thinly, and
> I really shouldn't be the one who picks new PEP numbers and checks that
> PEPs are well-formed according to the rules of PEP 1. A PEP editor
> shouldn't have to pass judgment on the contents of a PEP (though they may
> choose to correct spelling and grammar). Knowledge of Mercurial is a plus.
> :-)
>

And based on how Nick has been talking, it will continue to be, at least in
the medium term. =)

-Brett


>
> On Sun, Nov 30, 2014 at 4:50 PM, Donald Stufft  wrote:
>
>>
>> > On Nov 30, 2014, at 7:43 PM, Ben Finney 
>> wrote:
>> >
>> > Donald Stufft  writes:
>> >
>> >> It’s not lost, [… a long, presumably-accurate discourse of the many
>> >> conditions that must be met before …] you can restore it.
>> >
>> > This isn't the place to discuss the details of Git's internals, I think.
>> > I'm merely pointing out that:
>> >
>> >> The important thing to realize is that a “branch” isn’t anything
>> >> special in git.
>> >
>> > Because of that, Ethan's impression – that Git's default behaviour
>> > encourages losing history (by re-writing the history of commits to be
>> > other than what they were) is true, and “Git never loses history” simply
>> > isn't true.
>> >
>> > Whether that is a *problem* is a matter of debate, but the fact that
>> > Git's common workflow commonly discards information that some consider
>> > valuable, is a simple fact.
>> >
>> > If Ethan chooses to make that a factor in his decisions about Git, the
>> > facts are on his side.
>>
>> Except it’s not true at all.
>>
>> That data is all still there if you want it to exist and it’s not a real
>> differentiator between Mercurial and git because Mercurial has the ability
>> to do the same thing. Never mind the fact that “lose” your history makes
>> it
>> sound accidental instead of on purpose. It’s like saying that ``rm
>> foo.txt``
>> will “lose” the data in foo.txt. So either it was a misunderstanding in
>> which case I wanted to point out that those operations don’t magically
>> lose
>> information or it’s a purposely FUDish statement in which case I want to
>> point out that the statement is inaccurate.
>>
>> The only thing that is true is that git users are more likely to use the
>> ability to rewrite history than Mercurial users are, but you’ll typically
>> find that people generally don’t do this on public branches, only on
>> private
>> branches. Which again doesn

Re: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github

2014-12-02 Thread Brett Cannon
So I was waiting for Nick to say what he wanted to do for the peps repo
since I figure I get two of the three choices and he gets the other one.

The way I view it, the options are:

   1. Move to GitHub
   2. Move to Bitbucket
   3. Improve our current tooling (either through a new hosting setup and/or
   adding first-class support for downloading PRs from GitHub/Bitbucket)

Regardless of what we do, I think we should graduate the mirrors on GitHub
and Bitbucket to "official" -- for the proposed repos and cpython -- and
get those repos updating per-push instead of via a cron job. I also think we
should flip on any CI we can (e.g. turn on Travis for GitHub along with
coveralls support using coverage.py's encodings trick). This will get us the
most accessible repo backups as well as the widest tool coverage for
contributors to assist them in their contributions (heck, even if we just
get regular coverage reports for Python out of all of this, that would be a
great win).

Now as for whether we should move the repos, I see two possibilities to
help make that decision. One is we end up with 3 PEPs corresponding to the
3 proposals outlined above, get them done before PyCon, and then we have a
discussion at the language summit where we can either make a decision or
see what the pulse at the conference and sprints is and then make a decision
shortly thereafter (I can moderate the summit discussion to keep this
on-task and minimize the rambling; if Guido wants I can even make the final
call since I have already played the role of "villain" for our issue
tracker and hg decisions).

The other option is we take each one of the 3 proposed repos and
pilot/experiment with them on a different platform. I would put peps on
GitHub (as per Guido's comment of getting PRs from there already), the
devguide on Bitbucket, and leave devinabox on hg.python.org but with the
motivation of getting better tooling in place to contribute to it. We can
then see if anything changes between now and PyCon and then discuss what
occurred there (if we can't get the word out about this experiment and get
new tooling up and going on the issue tracker in the next 4 months then
that's another data point about how much people do/don't care about any of
this). Obviously if we end up needing more time we don't *have* to make a
decision at PyCon, but it's a good goal to have. I don't think we can
cleanly replicate a single repo on all three solutions as I sure don't want
to deal with that merging fun (unless someone comes forward to be basically
a "release manager" for one of the repos to make that experiment happen).

So do people want PEPs or experimentation first?

On Tue Dec 02 2014 at 8:24:16 AM Nick Coghlan  wrote:

> On 2 December 2014 at 01:38, Guido van Rossum  wrote:
> > As far as I'm concerned I'm just waiting for your decision now.
>
> The RhodeCode team got in touch with me offline to suggest the
> possibility of using RhodeCode Enterprise as a self-hosted solution
> rather than a volunteer-supported installation of Kallithea. I'll be
> talking to them tomorrow, and if that discussion goes well, will
> update PEP 474 (and potentially PEP 462) accordingly.
>
> Given that that would take away the "volunteer supported" vs
> "commercially supported" distinction between self-hosting and using
> GitHub (as well as potentially building a useful relationship that may
> help us resolve other workflow issues in the future), I'd like us to
> hold off on any significant decisions regarding the fate of any of the
> repos until I've had a chance to incorporate the results of that
> discussion into my proposals.
>
> As described in PEP 474, I'm aware of the Mercurial team's concerns
> with RhodeCode's current licensing, but still consider it a superior
> alternative to an outright proprietary solution that doesn't get us
> any closer to solving the workflow problems with the main CPython
> repo.
>
> Regards,
> Nick.
>
> P.S. I'll also bring up some of the RFEs raised in this discussion
> around making it possible for folks to submit pull requests via
> GitHub/BitBucket, even if the master repositories are hosted on PSF
> infrastructure.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github

2014-12-02 Thread Brett Cannon
On Tue Dec 02 2014 at 1:05:22 PM Guido van Rossum  wrote:

> Thanks for taking charge, Brett.
>
> I personally think this shouldn't be brought up at the summit -- it's
> likely to just cause lots of heat about git vs. hg, free vs. not-free,
> "loyalty" to free or open tools, the weighing of core committers'
> preferences vs. outside contributors' preferences, GitHub's diversity track
> record, with no new information added. Even if we *just* had a vote by
> show-of-hands at the summit that would just upset those who couldn't be
> present.
>

Well, if I'm going to be the Great Decider on this then I can say upfront
I'm taking a pragmatic view of preferring open but not mandating it,
preferring hg over git but not ruling out a switch, preferring Python-based
tools but not viewing it as a negative to not use Python, etc. I would like
to think I have earned somewhat of a reputation of being level-headed and
so none of this should really be a surprise to anyone.

So if we did have a discussion at the summit and someone decided to argue
for FLOSS vs. not as a key factor then I would politely cut them off and
say that doesn't matter to me and move on.  As I said, I would moderate the
conversation to keep it on-task and not waste my time with points that have
already been made and flagged by me and you as not deal-breakers. And any
votes would be to gauge the feeling of the room and not as a binding
decision; I assume either me or someone else is going to be the dictator on
this and this won't be a majority decision.


>
> But I'll leave that up to you. The only thing I ask you is not to give me
> the last word. I might just do something you regret. :-)
>

What about me doing something that *I* regret like taking this on? =)

-Brett


>
> --Guido
>
> On Tue, Dec 2, 2014 at 8:50 AM, Brett Cannon  wrote:
>
>> So I was waiting for Nick to say what he wanted to do for the peps repo
>> since I view it as I get 2/3 of the choices and he gets the other third.
>>
>> The way I view it, the options are:
>>
>>1. Move to GitHub
>>2. Move to Bitbucket
>>3. Improve our current tooling (either through new hosting setup
>>and/or adding first-world support for downloading PRs from 
>> GitHub/Bitbucket)
>>
>> Regardless of what we do, I think we should graduate the mirrors on
>> GitHub and Bitbucket to "official" -- for the proposed repos and cpython --
>> and get their repos updating per-push instead of as a cron job. I also
>> think we should also flip on any CI we can (e.g. turn on Travis for GitHub
>> along with coveralls support using coverage.py's encodings trick
>> <https://hg.python.org/devinabox/file/1eeb96fe98f1/README#l124>). This
>> will get us the most accessible repo backups as well as the widest tool
>> coverage for contributors to assist them in their contributions (heck, even
>> if we just get regular coverage reports for Python that would be a great
>> win out of all of this).
>>
>> Now as for whether we should move the repos, I see two possibilities to
>> help make that decision. One is we end up with 3 PEPs corresponding to the
>> 3 proposals outlined above, get them done before PyCon, and then we have a
>> discussion at the language summit where we can either make a decision or
>> see what the pulse at the conference and sprints then make a decision
>> shortly thereafter (I can moderate the summit discussion to keep this
>> on-task and minimize the rambling; if Guido wants I can even make the final
>> call since I have already played the role of "villain" for our issue
>> tracker and hg decisions).
>>
>> The other option is we take each one of the 3 proposed repos and
>> pilot/experiment with them on a different platform. I would put peps on
>> GitHub (as per Guido's comment of getting PRs from there already), the
>> devguide on Bitbucket, and leave devinabox on hg.python.org but with the
>> motivation of getting better tooling in place to contribute to it. We can
>> then see if anything changes between now and PyCon and then discuss what
>> occurred there (if we can't get the word out about this experiment and get
>> new tooling up and going on the issue tracker in the next 4 months then
>> that's another data point about how much people do/don't care about any of
>> this). Obviously if we end up needing more time we don't *have* to make
>> a decision at PyCon, but it's a good goal to have. I don't think we can
>> cleanly replicate a single repo on all three solutions as I sure don't want
>> to deal with that merging fun (unless som

Re: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github

2014-12-02 Thread Brett Cannon
On Tue Dec 02 2014 at 1:52:49 PM Antoine Pitrou  wrote:

> On Tue, 02 Dec 2014 18:21:39 +
> Brett Cannon  wrote:
> >
> > So if we did have a discussion at the summit and someone decided to argue
> > for FLOSS vs. not as a key factor then I would politely cut them off and
> > say that doesn't matter to me and move on.  As I said, I would moderate
> the
> > conversation to keep it on-task and not waste my time with points that
> have
> > already been made and flagged by me and you as not deal-breakers. And any
> > votes would be to gauge the feeling of the room and not as a binding
> > decision; I assume either me or someone else is going to be the dictator
> on
> > this and this won't be a majority decision.
>
> Can we stop making decisions at summits where it's always the same
> people being present?
>

I already said I'm not going to make a decision there, but you have to
admit that having an in-person discussion is a heck of a lot easier than going
back and forth in email, so I'm not willing to rule out at least talking
about the topic at PyCon. I wouldn't hold it against a BDFAP talking about
something at EuroPython and happening to make a decision while there and so
I would expect the same courtesy.

-Brett


>
> Thanks
>
> Antoine.
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github

2014-12-02 Thread Brett Cannon
On Tue Dec 02 2014 at 1:59:20 PM Barry Warsaw  wrote:

> On Dec 02, 2014, at 06:21 PM, Brett Cannon wrote:
>
> >Well, if I'm going to be the Great Decider on this then I can say upfront
> >I'm taking a pragmatic view of preferring open but not mandating it,
> >preferring hg over git but not ruling out a switch, preferring
> Python-based
> >tools but not viewing it as a negative to not use Python, etc. I would
> like
> >to think I have earned somewhat of a reputation of being level-headed and
> >so none of this should really be a surprise to anyone.
>
> I think it's equally important to describe what criteria you will use to
> make
> this decision.  E.g. are you saying all these above points will be
> completely
> ignored, or all else being equal, they will help tip the balance?
>

Considering Guido just gave me this position I have not exactly had a ton
of time to think the intricacies out, but they are all positives and can
help tip the balance or break ties (I purposely worded all of that with
"prefer", etc.). For instance, if a FLOSS solution came forward that looked
to be good and close enough to what would be a good workflow along with
support commitments from the infrastructure team and folks to maintain the
code -- and this will have to be several people, as experience with the
issue tracker has shown -- then that can help tip over the closed-source,
hosted solution which might have some perks. As for Python over something
else, that comes into play in open source more from a maintenance
perspective, but for closed source it would be a tie-breaker only since it
doesn't exactly influence the usability of the closed-source solution like
it does an open-source one.

Basically I'm willing to give brownie points for open source and Python
stuff, but it is just that: points and not deal-breakers.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github

2014-12-02 Thread Brett Cannon
On Tue Dec 02 2014 at 2:15:09 PM Donald Stufft  wrote:

>
> On Dec 2, 2014, at 2:09 PM, Brett Cannon  wrote:
>
>
>
> On Tue Dec 02 2014 at 1:59:20 PM Barry Warsaw  wrote:
>
>> On Dec 02, 2014, at 06:21 PM, Brett Cannon wrote:
>>
>> >Well, if I'm going to be the Great Decider on this then I can say upfront
>> >I'm taking a pragmatic view of preferring open but not mandating it,
>> >preferring hg over git but not ruling out a switch, preferring
>> Python-based
>> >tools but not viewing it as a negative to not use Python, etc. I would
>> like
>> >to think I have earned somewhat of a reputation of being level-headed and
>> >so none of this should really be a surprise to anyone.
>>
>> I think it's equally important to describe what criteria you will use to
>> make
>> this decision.  E.g. are you saying all these above points will be
>> completely
>> ignored, or all else being equal, they will help tip the balance?
>>
>
> Considering Guido just gave me this position I have not exactly had a ton
> of time to think the intricacies out, but they are all positives and can
> help tip the balance or break ties (I purposely worded all of that with
> "prefer", etc.). For instance, if a FLOSS solution came forward that looked
> to be good and close enough to what would be a good workflow along with
> support commitments from the infrastructure team and folks to maintain the
> code -- and this will have to be several people, as experience with the
> issue tracker has shown -- then that can help tip over the closed-source,
> hosted solution which might have some perks. As for Python over something
> else, that comes into play in open source more from a maintenance
> perspective, but for closed source it would be a tie-breaker only since it
> doesn't exactly influence the usability of the closed-source solution like
> it does an open-source one.
>
> Basically I'm willing to give brownie points for open source and Python
> stuff, but it is just that: points and not deal-breakers.
>
>
> This sounds like a pretty reasonable attitude to take towards this.
>
> If we’re going to be experimenting/talking things over, should I withdraw
> my PEP?
>

No because only two people have said they like the experiment idea so
that's not exactly enough to say it's worth the effort. =) Plus GitHub
could be chosen in the end.

Basically a PEP staying in draft is no big deal.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github

2014-12-02 Thread Brett Cannon
I should say I will take a few days to think about this and then I will
start a new thread outlining what I think we should be aiming for to help
frame the whole discussion and to give proponents something to target.

On Tue Dec 02 2014 at 2:20:16 PM Brett Cannon  wrote:

> On Tue Dec 02 2014 at 2:15:09 PM Donald Stufft  wrote:
>
>>
>> On Dec 2, 2014, at 2:09 PM, Brett Cannon  wrote:
>>
>>
>>
>> On Tue Dec 02 2014 at 1:59:20 PM Barry Warsaw  wrote:
>>
>>> On Dec 02, 2014, at 06:21 PM, Brett Cannon wrote:
>>>
>>> >Well, if I'm going to be the Great Decider on this then I can say
>>> upfront
>>> >I'm taking a pragmatic view of preferring open but not mandating it,
>>> >preferring hg over git but not ruling out a switch, preferring
>>> Python-based
>>> >tools but not viewing it as a negative to not use Python, etc. I would
>>> like
>>> >to think I have earned somewhat of a reputation of being level-headed
>>> and
>>> >so none of this should really be a surprise to anyone.
>>>
>>> I think it's equally important to describe what criteria you will use to
>>> make
>>> this decision.  E.g. are you saying all these above points will be
>>> completely
>>> ignored, or all else being equal, they will help tip the balance?
>>>
>>
>> Considering Guido just gave me this position I have not exactly had a ton
>> of time to think the intricacies out, but they are all positives and can
>> help tip the balance or break ties (I purposely worded all of that with
>> "prefer", etc.). For instance, if a FLOSS solution came forward that looked
>> to be good and close enough to what would be a good workflow along with
>> support commitments from the infrastructure team and folks to maintain the
>> code -- and this will have to be several people, as experience with the
>> issue tracker has shown -- then that can help tip over the closed-source,
>> hosted solution which might have some perks. As for Python over something
>> else, that comes into play in open source more from a maintenance
>> perspective, but for closed source it would be a tie-breaker only since it
>> doesn't exactly influence the usability of the closed-source solution like
>> it does an open-source one.
>>
>> Basically I'm willing to give brownie points for open source and Python
>> stuff, but it is just that: points and not deal-breakers.
>>
>>
>> This sounds like a pretty reasonable attitude to take towards this.
>>
>> If we’re going to be experimenting/talking things over, should I withdraw
>> my PEP?
>>
>
> No because only two people have said they like the experiment idea so
> that's not exactly enough to say it's worth the effort. =) Plus GitHub
> could be chosen in the end.
>
> Basically a PEP staying in draft is no big deal.
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github

2014-12-02 Thread Brett Cannon
On Tue Dec 02 2014 at 3:14:20 PM Barry Warsaw  wrote:

> On Dec 02, 2014, at 07:20 PM, Brett Cannon wrote:
>
> >No because only two people have said they like the experiment idea so
> >that's not exactly enough to say it's worth the effort. =) Plus GitHub
> >could be chosen in the end.
>
> Experimenting could be useful, although if the traffic is disproportionate
> (e.g. peps are updated way more often than devinabox) or folks don't
> interact
> with each of the repos, it might not be very representative.  Still, I
> think
> it's better to get a visceral sense of how things actually work than to
> speculate about how they *might* work.
>

That's my thinking. It's more about the workflow than measuring engagement
on GitHub vs. Bitbucket (we already know how that skews). If I had my wish
we would put the same repo in all three scenarios, but that is just asking
for merge headaches.

But I think if we go to the community and say, "help us test dev workflows
by submitting spelling and grammar fixes" then we should quickly get an
idea of the workflows (and I purposefully left devinabox out of a move
since it is never touched after it essentially became a build script and a
README and so represents our existing workflow; any benefit on our own
infrastructure can go straight to cpython anyway which we can all
experience).
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] My thinking about the development process

2014-12-05 Thread Brett Cannon
This is a bit long as I expounded as if this were a blog post to try and give
background info on my thinking, etc. The TL;DR folks should start at the
"Ideal Scenario" section and read to the end.

P.S.: This is in Markdown and I have put it up at
https://gist.github.com/brettcannon/a9c9a5989dc383ed73b4 if you want a
nicer formatted version for reading.

# History lesson
Since I signed up for the python-dev mailing list way back in June 2002,
there seems to be a cycle where we as a group come to a realization that
our current software development process has not kept up with modern
practices and could stand for an update. For me this was first shown when
we moved from SourceForge to our own infrastructure, then again when we
moved from Subversion to Mercurial (I led both of these initiatives, so
it's somewhat of a tradition/curse that I find myself in this position yet again).
And so we again find ourselves at the point of realizing that we are not
keeping up with current practices and thus need to evaluate how we can
improve our situation.

# Where we are now
Now it should be realized that we have two sets of users of our development
process: contributors and core developers (the latter of whom can play both
roles). If you take a rough outline of our current, recommended process it
goes something like this:

1. Contributor clones a repository from hg.python.org
2. Contributor makes desired changes
3. Contributor generates a patch
4. Contributor creates account on bugs.python.org and signs the
   [contributor agreement](https://www.python.org/psf/contrib/contrib-form/)
5. Contributor creates an issue on bugs.python.org (if one does not already
exist) and uploads a patch
6. Core developer evaluates patch, possibly leaving comments through our
[custom version of Rietveld](http://bugs.python.org/review/)
7. Contributor revises patch based on feedback and uploads new patch
8. Core developer downloads patch and applies it to a clean clone
9. Core developer runs the tests
10. Core developer does one last `hg pull -u` and then commits the changes
to various branches
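
For context, steps 1-3 above boil down to something like the following on the
command line (a rough sketch only; the issue number in the patch file name is
just a placeholder):

    hg clone https://hg.python.org/cpython
    cd cpython
    # ... edit files, run the relevant tests locally ...
    hg diff > fix-issue-NNNNN.patch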

I think we can all agree it works to some extent, but isn't exactly smooth.
There are multiple steps in there -- in full or partially -- that can be
automated. There is room to improve everyone's lives.

And we can't forget the people who help keep all of this running as well.
There are those that manage the SSH keys, the issue tracker, the review
tool, hg.python.org, and the email system that lets us know when stuff
happens on any of these other systems. The impact on them needs to also be
considered.

## Contributors
I see two scenarios for contributors to optimize for. There are the simple
spelling mistake patches and then there are the code change patches. The
former is the kind of thing that you can do in a browser without much
effort and should be a no-brainer commit/reject decision for a core
developer. This is what the GitHub/Bitbucket camps have been promoting
their solution for solving while leaving the cpython repo alone.
Unfortunately the bulk of our documentation is in the Doc/ directory of
cpython. While it's nice to think about moving the devguide, peps, and even
breaking out the tutorial to repos hosted on Bitbucket/GitHub, everything
else is in Doc/ (language reference, howtos, stdlib, C API, etc.). So
unless we want to completely break all of Doc/ out of the cpython repo and
have core developers willing to edit two separate repos when making changes
that impact code **and** docs, moving only a subset of docs feels like a
band-aid solution that ignores the big, white elephant in the room: the
cpython repo, where a bulk of patches are targeting.

For the code change patches, contributors need an easy way to get a hold of
the code and get their changes to the core developers. After that it's
things like letting contributors know that their patch doesn't apply
cleanly, doesn't pass tests, etc. As of right now getting the patch into
the issue tracker is a bit manual but nothing crazy. The real issue in this
scenario is core developer response time.

## Core developers
There is a finite amount of time that core developers get to contribute to
Python and it fluctuates greatly. This means that if a process can be found
which allows core developers to spend less time doing mechanical work and
more time doing things that can't be automated -- namely code reviews --
then the throughput of patches being accepted/rejected will increase. This
also impacts any increased patch submission rate that comes from improving
the situation for contributors because if the throughput doesn't change
then there will simply be more patches sitting in the issue tracker and
that doesn't benefit anyone.

# My ideal scenario
If I had an infinite amount of resources (money, volunteers, time, etc.),
this would be my ideal scenario:

1. Contributor gets code from wherever; easiest to just say "fork on GitHub
or Bitbucket" as they would be official mirrors of hg.python.org and are

[Python-Dev] Python 2/3 porting HOWTO has been updated

2014-12-05 Thread Brett Cannon
It now promotes using tooling as much as possible to automate the process
of making code Python 2/3 source-compatible:
https://docs.python.org/3.5/howto/pyporting.html

Blog post about it at
http://nothingbutsnark.svbtle.com/commentary-on-getting-your-code-to-run-on-python-23
.
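
As one example of the kind of tooling involved, a first automated pass with
the third-party Futurize tool could look roughly like this (a sketch; it
assumes the future package is installed, and mymodule.py is just a
placeholder):

    pip install future
    futurize --stage1 -w mymodule.py  # stage 1 applies only the conservative fixers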
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 2/3 porting HOWTO has been updated

2014-12-05 Thread Brett Cannon
On Fri Dec 05 2014 at 4:07:46 PM Benjamin Peterson 
wrote:

>
>
> On Fri, Dec 5, 2014, at 16:04, Brett Cannon wrote:
> > It now promotes using tooling as much as possible to automate the process
> > of making code by Python 2/3 source-compatible:
> > https://docs.python.org/3.5/howto/pyporting.html
>
> Are you going to update the 2.7 copy of the howto, too?
>

Have not decided yet. All the Google searches I have tried that bring up
the HOWTO use the Python 3 version. Plus I know people are going to find
mistakes that require fixing so I would rather wait until it stabilizes
before I bother backporting to 2.7.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 2/3 porting HOWTO has been updated

2014-12-06 Thread Brett Cannon
Thanks for the feedback. I'll update the doc probably on Friday.

On Sat Dec 06 2014 at 12:41:54 AM Nick Coghlan  wrote:

> On 6 December 2014 at 14:40, Nick Coghlan  wrote:
> > On 6 December 2014 at 10:44, Benjamin Peterson 
> wrote:
> >> On Fri, Dec 5, 2014, at 18:16, Donald Stufft wrote:
> >>> Do we need to update it? Can it just redirect to the 3 version?
> >>
> >> Technically, yes, of course. However, that would unexpected take you out
> >> of the Python 2 docs "context". Also, that doesn't solve the problem for
> >> the downloadable versions of the docs.
> >
> > As Benjamin says, we'll likely want to update the Python 2 version
> > eventually for the benefit of the downloadable version of the docs,
> > but Brett's also right it makes sense to wait for feedback on the
> > Python 3 version and then backport the most up to date text wholesale.
> >
> > In terms of the text itself, this is a great update Brett - thanks!
> >
> > A couple of specific notes:
> >
> > * http://python-future.org/compatible_idioms.html is my favourite
> > short list of "What are the specific Python 2 only habits that I need
> > to unlearn in order to start writing 2/3 compatible code?". It could
> > be worth mentioning in addition to the What's New documents and the
> > full Python 3 Porting book.
> >
> > * it's potentially worth explicitly noting the "bytes(index_value)"
> > and "str(bytes_value)" traps when discussing the bytes/text changes.
> > Those do rather different things in Python 2 & 3, but won't emit an
> > error or warning in either version.
>
> Given that 3.4 and 2.7.9 will be the first exposure some users will
> have had to pip, would it perhaps be worth explicitly mentioning the
> "pip install " commands for the various tools? At least pylint's
> PyPI page only gives the manual download instructions, including which
> dependencies you will need to install.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Brett Cannon
On Fri Dec 05 2014 at 3:24:38 PM Donald Stufft  wrote:

>
> On Dec 5, 2014, at 3:04 PM, Brett Cannon  wrote:
> 
>
>
> This looks like a pretty good write up, seems to pretty fairly evaluate
> the various sides and the various concerns.
>

Thanks! It seems like I have gotten the point across that I don't care what
the solution is as long as it's a good one and that we have to look at the
whole process and not just a corner of it if we want big gains.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Brett Cannon
On Fri Dec 05 2014 at 8:31:27 PM R. David Murray 
wrote:

> On Fri, 05 Dec 2014 15:17:35 -0700, Eric Snow 
> wrote:
> > On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon  wrote:
> > > We don't exactly have a ton of people
> > > constantly going "I'm so bored because everything for Python's
> development
> > > infrastructure gets sorted so quickly!" A perfect example is that R.
> David
> > > Murray came up with a nice update for our workflow after PyCon but
> then ran
> > > out of time after mostly defining it and nothing ever became of it
> (maybe we
> > > can rectify that at PyCon?). Eric Snow has pointed out how he has
> written
> > > similar code for pulling PRs from I think GitHub to another code review
> > > tool, but that doesn't magically make it work in our infrastructure or
> get
> > > someone to write it and help maintain it (no offense, Eric).
> >
> > None taken.  I was thinking the same thing when I wrote that. :)
> >
> > >
> > > IOW our infrastructure can do anything, but it can't run on hopes and
> > > dreams. Commitments from many people to making this happen by a certain
> > > deadline will be needed so as to not allow it to drag on forever.
> People
> > > would also have to commit to continued maintenance to make this viable
> > > long-term.
>
> The biggest blocker to my actually working the proposal I made was that
> people wanted to see it in action first, which means I needed to spin up
> a test instance of the tracker and do the work there.  That barrier to
> getting started was enough to keep me from getting started...even though
> the barrier isn't *that* high (I've done it before, and it is easier now
> than it was when I first did it), it is still a *lot* higher than
> checking out CPython and working on a patch.
>
> That's probably the biggest issue with *anyone* contributing to tracker
> maintenance, and if we could solve that, I think we could get more
> people interested in helping maintain it.  We need the equivalent of
> dev-in-a-box for setting up for testing proposed changes to
> bugs.python.org, but including some standard way to get it deployed so
> others can look at a live system running the change in order to review
> the patch.
>

Maybe it's just me and all the Docker/Rocket hoopla that's occurred over
the past week, but this just screams "container" to me which would make
getting a test instance set up dead simple.
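
Purely as a sketch of what that could look like -- assuming someone writes a
Dockerfile for the tracker, which as far as I know doesn't exist yet --
spinning up a throwaway test instance would be roughly:

    docker build -t bpo-tracker-test .              # hypothetical image name
    docker run --rm -p 8080:8080 bpo-tracker-test   # hypothetical port mapping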


>
> Maybe our infrastructure folks will have a thought or two about this?
> I'm willing to put some work into this if we can figure out what
> direction to head in.  It could well be tied in to moving
> bugs.python.org in with the rest of our infrastructure, something I know
> Donald has been noodling with off and on; and I'm willing to help with
> that as well.
>
> It sounds like being able to propose and test changes to our Roundup
> instance (and test other services talking to Roundup, before deploying
> them for real) is going to be critical to improving our workflow no
> matter what other decisions are made, so we need to make it easier to
> do.
>
> In other words, it seems like the key to improving the productivity of
> our CPython patch workflow is to improve the productivity of the patch
> workflow for our key workflow resource, bugs.python.org.
>

Quite possible and since no one is suggesting we drop bugs.python.org it's
a worthy goal to have regardless of what PEP gets accepted.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Brett Cannon
On Sat Dec 06 2014 at 2:53:43 AM Terry Reedy  wrote:

> On 12/5/2014 3:04 PM, Brett Cannon wrote:
>
> > 1. Contributor clones a repository from hg.python.org <
> http://hg.python.org>
> > 2. Contributor makes desired changes
> > 3. Contributor generates a patch
> > 4. Contributor creates account on bugs.python.org
> > <http://bugs.python.org> and signs the
> > [contributor
> > agreement](https://www.python.org/psf/contrib/contrib-form/)
>
> I would like to have the process of requesting and enforcing the signing
> of CAs automated.
>

So would I.


>
> > 4. Contributor creates an issue on bugs.python.org
> > <http://bugs.python.org> (if one does not already exist) and uploads a
> patch
>
> I would like to have patches rejected, or at least held up, until a CA
> is registered.  For this to work, a signed CA should be immediately
> registered on the tracker, at least as 'pending'.  It now can take a
> week or more to go through human processing.
>

This is one of the reasons I didn't want to create an issue magically from
PRs initially. I think it's totally doable with some coding.

-Brett


>
>
> > 5. Core developer evaluates patch, possibly leaving comments through our
> > [custom version of Rietveld](http://bugs.python.org/review/)
> > 6. Contributor revises patch based on feedback and uploads new patch
> > 7. Core developer downloads patch and applies it to a clean clone
> > 8. Core developer runs the tests
> > 9. Core developer does one last `hg pull -u` and then commits the
> > changes to various branches
>
> --
> Terry Jan Reedy
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Brett Cannon
On Fri Dec 05 2014 at 5:17:35 PM Eric Snow 
wrote:

> Very nice, Brett.
>

Thanks!


>
> On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon  wrote:
> > And we can't forget the people who help keep all of this running as well.
> > There are those that manage the SSH keys, the issue tracker, the review
> > tool, hg.python.org, and the email system that let's use know when stuff
> > happens on any of these other systems. The impact on them needs to also
> be
> > considered.
>
> It sounds like Guido would rather as much of this was done by a
> provider rather than relying on volunteers.  That makes sense though
> there are concerns about control of certain assents.  However, that
> applies only to some, like hg.python.org.
>

Sure, but that's also the reason Guido stuck me with the job of being the
Great Decider on this. =) I have a gut feeling of how much support would
need to be committed in order to consider things covered well enough (I
can't give a number because it will vary depending on who steps forward;
someone who I know and trust to stick around is worth more than someone who
kindly steps forward and has never volunteered, but that's just because I
don't know the stranger, not because I innately don't want people who are
unknown on python-dev to step forward).


>
> >
> > ## Contributors
> > I see two scenarios for contributors to optimize for. There's the simple
> > spelling mistake patches and then there's the code change patches. The
> > former is the kind of thing that you can do in a browser without much
> effort
> > and should be a no-brainer commit/reject decision for a core developer.
> This
> > is what the GitHub/Bitbucket camps have been promoting their solution for
> > solving while leaving the cpython repo alone. Unfortunately the bulk of
> our
> > documentation is in the Doc/ directory of cpython. While it's nice to
> think
> > about moving the devguide, peps, and even breaking out the tutorial to
> repos
> > hosting on Bitbucket/GitHub, everything else is in Doc/ (language
> reference,
> > howtos, stdlib, C API, etc.). So unless we want to completely break all
> of
> > Doc/ out of the cpython repo and have core developers willing to edit two
> > separate repos when making changes that impact code **and** docs, moving
> > only a subset of docs feels like a band-aid solution that ignores the
> big,
> > white elephant in the room: the cpython repo, where a bulk of patches are
> > targeting.
>
> With your ideal scenario this would be a moot point, right?  There
> would be no need to split out doc-related repos.
>

Exactly, which is why I stressed we can't simply ignore the cpython repo.
If someone is bored they could run an analysis on the various repos,
calculate the number of contributions from outsiders -- maybe check the logs
for the use of the word "Thank" since we typically say "Thanks to ..." --
and see how many external contributions we got in all the repos and also a
detailed breakdown for Doc/.


>
> >
> > For the code change patches, contributors need an easy way to get a hold
> of
> > the code and get their changes to the core developers. After that it's
> > things like letting contributors knowing that their patch doesn't apply
> > cleanly, doesn't pass tests, etc.
>
> This is probably more work than it seems at first.
>

Maybe, maybe not. Depends on what external services someone wants to rely
on. E.g., could a webhook with some CI company be used so that it's more
"grab the patch from here and run the tests" vs. us having to manage the
whole CI infrastructure? Just because the home-grown solution requires
developers and maintenance doesn't mean that the maintenance is more
maintaining the code to interface with an external service provider instead
of providing the service ourselves from scratch. And don't forget companies
will quite possibly donate services if you ask or the PSF could pay for
some things.


>
> > As of right now getting the patch into the
> > issue tracker is a bit manual but nothing crazy. The real issue in this
> > scenario is core developer response time.
> >
> > ## Core developers
> > There is a finite amount of time that core developers get to contribute
> to
> > Python and it fluctuates greatly. This means that if a process can be
> found
> > which allows core developers to spend less time doing mechanical work and
> > more time doing things that can't be automated -- namely code reviews --
> > then the throughput of patches being accepted/rejected will increase.
> This
> > also impacts any increased patch 

Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Brett Cannon
On Sat Dec 06 2014 at 10:07:50 AM Donald Stufft  wrote:

>
> On Dec 6, 2014, at 9:11 AM, Brett Cannon  wrote:
>
>
>
> On Fri Dec 05 2014 at 8:31:27 PM R. David Murray 
> wrote:
>
>> On Fri, 05 Dec 2014 15:17:35 -0700, Eric Snow <
>> ericsnowcurren...@gmail.com> wrote:
>> > On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon  wrote:
>> > > We don't exactly have a ton of people
>> > > constantly going "I'm so bored because everything for Python's
>> development
>> > > infrastructure gets sorted so quickly!" A perfect example is that R.
>> David
>> > > Murray came up with a nice update for our workflow after PyCon but
>> then ran
>> > > out of time after mostly defining it and nothing ever became of it
>> (maybe we
>> > > can rectify that at PyCon?). Eric Snow has pointed out how he has
>> written
>> > > similar code for pulling PRs from I think GitHub to another code
>> review
>> > > tool, but that doesn't magically make it work in our infrastructure
>> or get
>> > > someone to write it and help maintain it (no offense, Eric).
>> >
>> > None taken.  I was thinking the same thing when I wrote that. :)
>> >
>> > >
>> > > IOW our infrastructure can do anything, but it can't run on hopes and
>> > > dreams. Commitments from many people to making this happen by a
>> certain
>> > > deadline will be needed so as to not allow it to drag on forever.
>> People
>> > > would also have to commit to continued maintenance to make this viable
>> > > long-term.
>>
>> The biggest blocker to my actually working the proposal I made was that
>> people wanted to see it in action first, which means I needed to spin up
>> a test instance of the tracker and do the work there.  That barrier to
>> getting started was enough to keep me from getting started...even though
>> the barrier isn't *that* high (I've done it before, and it is easier now
>> than it was when I first did it), it is still a *lot* higher than
>> checking out CPython and working on a patch.
>>
>> That's probably the biggest issue with *anyone* contributing to tracker
>> maintenance, and if we could solve that, I think we could get more
>> people interested in helping maintain it.  We need the equivalent of
>> dev-in-a-box for setting up for testing proposed changes to
>> bugs.python.org, but including some standard way to get it deployed so
>> others can look at a live system running the change in order to review
>> the patch.
>>
>
> Maybe it's just me and all the Docker/Rocket hoopla that's occurred over
> the past week, but this just screams "container" to me which would make
> getting a test instance set up dead simple.
>
>
> Heh, one of my thoughts on deploying the bug tracker into production was
> via a container, especially since we have multiple instances of it. I got
> side tracked on getting the rest of the infrastructure readier for a web
> application and some improvements there as well as getting a big postgresql
> database cluster set up (2x 15GB RAM servers running in Primary/Replica
> mode). The downside of course to this is that afaik Docker is a lot harder
> to use on Windows and to some degree OS X than linux. However if the
> tracker could be deployed as a docker image that would make the
> infrastructure side a ton easier. I also have control over the python/
> organization on Docker Hub too for whatever uses we have for it.
>

I think it's something worth thinking about, but like you I don't know if
the containers work on OS X or Windows (I don't work with containers
personally).


>
> Unrelated to the tracker:
>
> Something that any PEP should consider is security, particularly that of
> running the tests. Currently we have a buildbot fleet that checks out the
> code and executes the test suite (aka code). A problem that any pre-merge
> test runner needs to solve is that unlike a post-merge runner, which will
> only run code that has been committed by a committer, a pre-merge runner
> will run code that _anybody_ has submitted. This means that it’s not merely
> enough to simply trigger a build in our buildbot fleet prior to the merge
> happening as that would allow anyone to execute arbitrary code there. As
> far as I’m aware there are two solutions to this problem in common use,
> either use throw away environments/machines/containers that isolate the
> running code and then get destroyed after each test run, or don’t run the
> pre-merge tests immediately unless it’s 

Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Brett Cannon
On Sat Dec 06 2014 at 10:30:54 AM Nick Coghlan  wrote:

> On 7 December 2014 at 00:11, Brett Cannon  wrote:
> > On Fri Dec 05 2014 at 8:31:27 PM R. David Murray 
> > wrote:
> >>
> >> That's probably the biggest issue with *anyone* contributing to tracker
> >> maintenance, and if we could solve that, I think we could get more
> >> people interested in helping maintain it.  We need the equivalent of
> >> dev-in-a-box for setting up for testing proposed changes to
> >> bugs.python.org, but including some standard way to get it deployed so
> >> others can look at a live system running the change in order to review
> >> the patch.
> >
> >
> > Maybe it's just me and all the Docker/Rocket hoopla that's occurred over
> the
> > past week, but this just screams "container" to me which would make
> getting
> > a test instance set up dead simple.
>
> It's not just you (and Graham Dumpleton has even been working on
> reference images for Apache/mod_wsgi hosting of Python web services:
> http://blog.dscpl.com.au/2014/12/hosting-python-wsgi-
> applications-using.html)
>
> You still end up with Vagrant as a required element for Windows and
> Mac OS X, but that's pretty much a given for a lot of web service
> development these days.
>

If we need a testbed then we could try it out with a devinabox and see how
it works with new contributors at PyCon. Would be nice to just have Clang,
all the extras for the stdlib, etc. already pulled together for people to
work from.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] My thinking about the development process

2014-12-08 Thread Brett Cannon
On Mon Dec 08 2014 at 3:27:43 PM Jim J. Jewett  wrote:

>
>
> Brett Cannon wrote:
> > 4. Contributor creates account on bugs.python.org and signs the
> >   [contributor agreement](https://www.python.
> org/psf/contrib/contrib-form/)
>
> Is there an expiration on such forms?  If there doesn't need to be
> (and one form is good for multiple tickets), is there an objection
> (besides "not done yet") to making "signed the form" part of the bug
> reporter account, and required to submit to the CI process?  (An "I
> can't sign yet, bug me later" option would allow the current workflow
> without the "this isn't technically a patch" workaround for "small enough"
> patches from those with slow-moving employers.)
>

IANAL but I believe that as long as you didn't sign it on behalf of work for
your employer, it's good for life.


>
>
> > There's the simple spelling mistake patches and then there's the
> > code change patches.
>
> There are a fair number of one-liner code patches; ideally, they
> could also be handled quickly.
>

Depends on the change. Syntactic typos could still get through. But yes,
they are also a possibility for a quick submission.


>
> > For the code change patches, contributors need an easy way to get a hold
> of
> > the code and get their changes to the core developers.
>
> For a fair number of patches, the same workflow as spelling errors is
> appropriate, except that it would be useful to have an automated state
> saying "yes, this currently merges fine", so that committers can focus
> only on patches that are (still) at least that ready.
>
> > At best core developers tell a contributor "please send your PR
> > against 3.4", push-button merge it, update a local clone, merge from
> > 3.4 to default, do the usual stuff, commit, and then push;
>
> Is it common for a patch that should apply to multiple branches to fail
> on some but not all of them?
>

Going from 3.4 -> 3.5 is almost always clean sans NEWS, but from 2.7 it is
nowhere near as guaranteed.
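
For anyone unfamiliar with the dance being referenced, it's roughly the
following (a sketch only; the issue number, patch name, and commit messages
are placeholders):

    hg update 3.4
    hg import --no-commit fix-issue-NNNNN.patch
    # run the tests, add the NEWS entry
    hg commit -m "Issue #NNNNN: fix ..."
    hg update default
    hg merge 3.4
    # resolve any conflicts (usually Misc/NEWS), re-run the tests
    hg commit -m "Merge 3.4: Issue #NNNNN"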


>
> In other words, is there any reason beyond "not done yet" that submitting
> a patch (or pull request) shouldn't automatically create a patch per
> branch, with pushbuttons to test/reject/commit?
>

Assuming that you specify which branches, then not really. But if it is
done blindly then yes, as that's unnecessary noise and could lead to arguments
over whether something should (not) be applied to some specific version.


>
> > Our code review tool is a fork that probably should be
> > replaced as only Martin von Loewis can maintain it.
>
> Only he knows the innards, or only he is authorized, or only he knows
> where the code currently is/how to deploy an update?
>

Innards.

-Brett


>
> I know that there were times in the (not-so-recent) past when I had
> time and willingness to help with some part of the infrastructure, but
> didn't know where the code was, and didn't feel right making a blind
> offer.
>
>
> -jJ
>
> --
>
> If there are still threading problems with my replies, please
> email me with details, so that I can try to resolve them.  -jJ
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] python compile error on mac os x

2014-12-10 Thread Brett Cannon
It would be better to file a bug at bugs.python.org so it's easier to track
the problem.

On Wed Dec 10 2014 at 11:37:30 AM 卓一抗  wrote:

> hello, everybody, I'm running into an ld error on Mac OS X
>
> python 3.4.2  gcc 4.8.2
>
> /Applications/Xcode.app/Contents/Developer/usr/bin/make Parser/pgen
> gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes  -L/usr/local/lib
> -export-dynamic  Parser/acceler.o Parser/grammar1.o Parser/listnode.o
> Parser/node.o Parser/parser.o Parser/bitset.o Parser/metagrammar.o
> Parser/firstsets.o Parser/grammar.o Parser/pgen.o Objects/obmalloc.o
> Python/dynamic_annotations.o Python/mysnprintf.o Python/pyctype.o
> Parser/tokenizer_pgen.o Parser/printgrammar.o Parser/parsetok_pgen.o
> Parser/pgenmain.o -ldl  -framework CoreFoundation -o Parser/pgen
> ld: unknown option: -export-dynamic
> collect2: error: ld returned 1 exit status
> make[1]: *** [Parser/pgen] Error 1
> make: *** [Include/graminit.h] Error 2
>
>
> how do I solve this? Can anybody help me?
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] My thinking about the development process

2014-12-11 Thread Brett Cannon
As I didn't hear any objections, I'm officially stating that I expect
initial draft PEPs to be in by February 1 so I know who is in the running and
can focus discussion. I then expect complete PEPs by April 1 so I can read them
before PyCon and have informed discussions while I'm there. I will then
plan to make a final decision by May 1 so that we can try to have the
changes ready for Python 3.6 development (currently scheduled for Sep 2015).

On Fri Dec 05 2014 at 3:04:48 PM Brett Cannon  wrote:

> This is a bit long as I espoused as if this was a blog post to try and
> give background info on my thinking, etc. The TL;DR folks should start at
> the "Ideal Scenario" section and read to the end.
>
> P.S.: This is in Markdown and I have put it up at
> https://gist.github.com/brettcannon/a9c9a5989dc383ed73b4 if you want a
> nicer formatted version for reading.
>
> # History lesson
> Since I signed up for the python-dev mailing list way back in June 2002,
> there seems to be a cycle where we as a group come to a realization that
> our current software development process has not kept up with modern
> practices and could stand for an update. For me this was first shown when
> we moved from SourceForge to our own infrastructure, then again when we
> moved from Subversion to Mercurial (I led both of these initiatives, so
> it's somewhat a tradition/curse I find myself in this position yet again).
> And so we again find ourselves at the point of realizing that we are not
> keeping up with current practices and thus need to evaluate how we can
> improve our situation.
>
> # Where we are now
> Now it should be realized that we have to sets of users of our development
> process: contributors and core developers (the latter whom can play both
> roles). If you take a rough outline of our current, recommended process it
> goes something like this:
>
> 1. Contributor clones a repository from hg.python.org
> 2. Contributor makes desired changes
> 3. Contributor generates a patch
> 4. Contributor creates account on bugs.python.org and signs the
>[contributor agreement](
> https://www.python.org/psf/contrib/contrib-form/)
> 4. Contributor creates an issue on bugs.python.org (if one does not
> already exist) and uploads a patch
> 5. Core developer evaluates patch, possibly leaving comments through our
> [custom version of Rietveld](http://bugs.python.org/review/)
> 6. Contributor revises patch based on feedback and uploads new patch
> 7. Core developer downloads patch and applies it to a clean clone
> 8. Core developer runs the tests
> 9. Core developer does one last `hg pull -u` and then commits the changes
> to various branches
>
> I think we can all agree it works to some extent, but isn't exactly
> smooth. There are multiple steps in there -- in full or partially -- that
> can be automated. There is room to improve everyone's lives.
>
> And we can't forget the people who help keep all of this running as well.
> There are those that manage the SSH keys, the issue tracker, the review
> tool, hg.python.org, and the email system that lets us know when stuff
> happens on any of these other systems. The impact on them needs to also be
> considered.
>
> ## Contributors
> I see two scenarios for contributors to optimize for. There's the simple
> spelling mistake patches and then there's the code change patches. The
> former is the kind of thing that you can do in a browser without much
> effort and should be a no-brainer commit/reject decision for a core
> developer. This is what the GitHub/Bitbucket camps have been promoting
> their solution for solving while leaving the cpython repo alone.
> Unfortunately the bulk of our documentation is in the Doc/ directory of
> cpython. While it's nice to think about moving the devguide, peps, and even
> breaking out the tutorial to repos hosting on Bitbucket/GitHub, everything
> else is in Doc/ (language reference, howtos, stdlib, C API, etc.). So
> unless we want to completely break all of Doc/ out of the cpython repo and
> have core developers willing to edit two separate repos when making changes
> that impact code **and** docs, moving only a subset of docs feels like a
> band-aid solution that ignores the big, white elephant in the room: the
> cpython repo, where a bulk of patches are targeting.
>
> For the code change patches, contributors need an easy way to get a hold
> of the code and get their changes to the core developers. After that it's
> things like letting contributors knowing that their patch doesn't apply
> cleanly, doesn't pass tests, etc. As of right now getting the patch into
> the issue tracker is a bit manual but nothing crazy. The real issue in this
> scenario is cor

Re: [Python-Dev] My thinking about the development process

2014-12-11 Thread Brett Cannon
Just adapt your current PEP.

On Thu Dec 11 2014 at 10:02:23 AM Donald Stufft  wrote:

>
> On Dec 11, 2014, at 9:59 AM, Brett Cannon  wrote:
>
> As I didn't hear any objections, I'm officially stating that I expect
> initial draft PEPs to be in by February 1 to know who is in the running to
> focus discussion. I then expect complete PEPs by April 1 so I can read them
> before PyCon and have informed discussions while I'm there. I will then
> plan to make a final decision by May 1 so that we can try to have the
> changes ready for Python 3.6 development (currently scheduled for Sep 2015).
>
>
> Is it OK to adapt my current PEP or should I create a whole new one?
>
> ---
> Donald Stufft
> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
>
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition

2014-12-11 Thread Brett Cannon
On Thu Dec 11 2014 at 3:14:42 PM Dan Stromberg  wrote:

> On Thu, Dec 11, 2014 at 11:35 AM, Mark Roberts  wrote:
> > I disagree. I know there's a huge focus on The Big Libraries (and
> wholesale
> > migration is all but impossible without them), but the long tail of
> > libraries is still incredibly important. It's like saying that migrating
> the
> > top 10 Perl libraries to Perl 6 would allow people to completely ignore
> all
> > of CPAN. It just doesn't make sense.
>
> Things in the Python 2.x vs 3.x world aren't that bad.
>
> See:
> https://python3wos.appspot.com/ and
> https://wiki.python.org/moin/PortingPythonToPy3k
> http://stromberg.dnsalias.org/~strombrg/Intro-to-Python/ (writing code
> to run on 2.x and 3.x)
>
> I believe just about everything I've written over the last few years
> either ran on 2.x and 3.x unmodified, or ran on 3.x alone.  If you go
> the former route, you don't need to wait for your libraries to be
> updated.
>
> I usually run pylint twice for my projects (after each change, prior
> to checkin), once with a 2.x interpreter, and once with a 3.x
> interpreter (using
> http://stromberg.dnsalias.org/svn/this-pylint/trunk/this-pylint) , but
> I gather pylint has the option of running on a 2.x interpreter and
> warning about anything that wouldn't work on 3.x.
>

Pylint 1.4 has a --py3k flag to run only checks related to Python 3
compatibility under Python 2.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 2/3 porting HOWTO has been updated

2014-12-12 Thread Brett Cannon
I have now addressed Nick's comments and backported to Python 2.7.

On Sat Dec 06 2014 at 8:40:24 AM Brett Cannon  wrote:

> Thanks for the feedback. I'll update the doc probably on Friday.
>
> On Sat Dec 06 2014 at 12:41:54 AM Nick Coghlan  wrote:
>
>> On 6 December 2014 at 14:40, Nick Coghlan  wrote:
>> > On 6 December 2014 at 10:44, Benjamin Peterson 
>> wrote:
>> >> On Fri, Dec 5, 2014, at 18:16, Donald Stufft wrote:
>> >>> Do we need to update it? Can it just redirect to the 3 version?
>> >>
>> >> Technically, yes, of course. However, that would unexpectedly take you
>> out
>> >> of the Python 2 docs "context". Also, that doesn't solve the problem
>> for
>> >> the downloadable versions of the docs.
>> >
>> > As Benjamin says, we'll likely want to update the Python 2 version
>> > eventually for the benefit of the downloadable version of the docs,
>> > but Brett's also right it makes sense to wait for feedback on the
>> > Python 3 version and then backport the most up to date text wholesale.
>> >
>> > In terms of the text itself, this is a great update Brett - thanks!
>> >
>> > A couple of specific notes:
>> >
>> > * http://python-future.org/compatible_idioms.html is my favourite
>> > short list of "What are the specific Python 2 only habits that I need
>> > to unlearn in order to start writing 2/3 compatible code?". It could
>> > be worth mentioning in addition to the What's New documents and the
>> > full Python 3 Porting book.
>> >
>> > * it's potentially worth explicitly noting the "bytes(index_value)"
>> > and "str(bytes_value)" traps when discussing the bytes/text changes.
>> > Those do rather different things in Python 2 & 3, but won't emit an
>> > error or warning in either version.
>>
>> Given that 3.4 and 2.7.9 will be the first exposure some users will
>> have had to pip, would it perhaps be worth explicitly mentioning the
>> "pip install " commands for the various tools? At least pylint's
>> PyPI page only gives the manual download instructions, including which
>> dependencies you will need to install.
>>
>> Cheers,
>> Nick.
>>
>> --
>> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>>
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition

2014-12-16 Thread Brett Cannon
Mark, your tone is no longer constructive and is hurting your case in
arguing for anything. Please take it down a notch.

On Tue Dec 16 2014 at 1:48:59 PM Mark Roberts  wrote:

> On Tue, Dec 16, 2014 at 2:45 AM, Antoine Pitrou 
> wrote:
>>
>> Iterating across a dictionary doesn't need compatibility shims. It's
>> dead simple in all Python versions:
>>
>> $ python2
>> Python 2.7.8 (default, Oct 20 2014, 15:05:19)
>> [GCC 4.9.1] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> d = {'a': 1}
>> >>> for k in d: print(k)
>> ...
>> a
>>
>> $ python3
>> Python 3.4.2 (default, Oct  8 2014, 13:08:17)
>> [GCC 4.9.1] on linux
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> d = {'a': 1}
>> >>> for k in d: print(k)
>> ...
>> a
>>
>> Besides, using iteritems() and friends is generally a premature
>> optimization, unless you know you'll have very large containers.
>> Creating a list is cheap.
>>
>
> It seems to me that every time I hear this, the author is basically
> admitting that Python is a toy language not meant for "serious computing"
> (where serious is defined in extremely modest terms). The advice is also
> very contradictory to literally every talk on performant Python that I've
> seen at PyCon or PyData or ... well, anywhere. And really, doesn't it
> strike you as incredibly presumptuous to call the *DEFAULT BEHAVIOR* of
> Python 3 a "premature optimization"? Isn't the whole reason that the
> default behavior switch was made is because creating lists willy nilly all
> over the place really *ISN'T* cheap? This isn't the first time someone has
> tried to run this line past me, but it's the first time I've been fed up
> enough with the topic to call it complete BS on the spot. Please help me
> stop the community at large from saying this, because it really isn't true
> at all.
>
> -Mark
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition

2014-12-16 Thread Brett Cannon
On Tue Dec 16 2014 at 2:05:28 PM Mark Roberts  wrote:

> Perhaps you are correct, and I will attempt to remain more constructive on
> the topic (despite it being an *incredibly* frustrating experience).
> However, my point remains: this is a patently false thing that is being
> parroted throughout the Python community, and it's outright insulting to be
> told my complaints about writing 2/3 compatible code are invalid on the
> basis of "premature optimization".
>

See, you're still using a very negative tone even after saying you would
try to scale it back. What Antoine said is not patently false; all he
said was that relying on iter*() methods on dicts is typically a premature
optimization for Python 2 code, which is a totally reasonable thing for him
to say, and he said it in a calm tone. He didn't say "you are prematurely
optimizing and you need to stop telling the community that because you're
wasting everyone's time in caring about performance!", which is how I would
expect you to state it if you were to make the same claim, based on how you
have been reacting.

For most use cases, you simply don't need a memory-efficient iterator. If
you have a large dict where memory issues from constructing a list come
into play, then yes you should use iterkeys(), but otherwise the overhead
of temporarily constructing a list to hold all the keys is cheap since it's
just a list of pointers at the C level.

As for the changing of the default in Python 3, that's because we decided
to make iterators the default everywhere. And that was mostly for
consistency, not performance reasons. It was also for flexibility as you
can go from an iterator to a list by just wrapping the iterator in list(),
but you can't go the other way around. At no time did anyone go "we really
need to change the default iterator for dicts to a memory-saving iterator
because people are wasting memory and having issues with memory pressure
all the time"; it was always about consistency and using the best idiom
that had developed over the years.
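
To make that concrete, here is a minimal sketch (not from the original
thread; it assumes Python 2.7 semantics and a made-up dict `d`):

    d = {'a': 1, 'b': 2}

    keys_list = d.keys()       # Python 2: builds a real list of the keys
    keys_iter = d.iterkeys()   # Python 2: lazy iterator, no list is built

    # The idiom below works identically on Python 2 and 3 and is usually
    # all you need; wrap in list() only when you want an explicit snapshot.
    for key in d:
        print(key)
    snapshot = list(d)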

So Antoine's point is entirely reasonable and valid and right.

-Brett


>
> -Mark
>
> On Tue, Dec 16, 2014 at 10:57 AM, Brett Cannon  wrote:
>>
>> Mark, your tone is no longer constructive and is hurting your case in
>> arguing for anything. Please take it down a notch.
>>
>> On Tue Dec 16 2014 at 1:48:59 PM Mark Roberts  wrote:
>>
>>> On Tue, Dec 16, 2014 at 2:45 AM, Antoine Pitrou 
>>> wrote:
>>>>
>>>> Iterating across a dictionary doesn't need compatibility shims. It's
>>>> dead simple in all Python versions:
>>>>
>>>> $ python2
>>>> Python 2.7.8 (default, Oct 20 2014, 15:05:19)
>>>> [GCC 4.9.1] on linux2
>>>> Type "help", "copyright", "credits" or "license" for more information.
>>>> >>> d = {'a': 1}
>>>> >>> for k in d: print(k)
>>>> ...
>>>> a
>>>>
>>>> $ python3
>>>> Python 3.4.2 (default, Oct  8 2014, 13:08:17)
>>>> [GCC 4.9.1] on linux
>>>> Type "help", "copyright", "credits" or "license" for more information.
>>>> >>> d = {'a': 1}
>>>> >>> for k in d: print(k)
>>>> ...
>>>> a
>>>>
>>>> Besides, using iteritems() and friends is generally a premature
>>>> optimization, unless you know you'll have very large containers.
>>>> Creating a list is cheap.
>>>>
>>>
>>> It seems to me that every time I hear this, the author is basically
>>> admitting that Python is a toy language not meant for "serious computing"
>>> (where serious is defined in extremely modest terms). The advice is also
>>> very contradictory to literally every talk on performant Python that I've
>>> seen at PyCon or PyData or ... well, anywhere. And really, doesn't it
>>> strike you as incredibly presumptuous to call the *DEFAULT BEHAVIOR* of
>>> Python 3 a "premature optimization"? Isn't the whole reason that the
>>> default behavior switch was made is because creating lists willy nilly all
>>> over the place really *ISN'T* cheap? This isn't the first time someone has
>>> tried to run this line past me, but it's the first time I've been fed up
>>> enough with the topic to call it complete BS on the spot. Please help me
>>> stop the community at large from saying this, because it really isn't true
>>> at all.
>>>
>>> -Mark
>>> ___
>>> Python-Dev mailing list
>>> Python-Dev@python.org
>>> https://mail.python.org/mailman/listinfo/python-dev
>>> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
>>> brett%40python.org
>>>
>>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 3.4.2/ PyGame Registry

2014-12-16 Thread Brett Cannon
This mailing list is for the development OF Python, not its use. You should
be able to get help on the python-tutor or python-list mailing lists.

On Tue, Dec 16, 2014, 16:42 Matthew Braun  wrote:

> Good Morning,
> I installed Python 3.4.2 on my work computer.  I was looking at the book
> "Head First Programming" which references download PYGAME. I downloaded
> what I believe to be the correct version and it tells me that I don't see
> the installer. I look in the registry and there is no:
>
> *HKEY_CURRENT_USER\Software\Python\*
>
> Did I do something wrong? This is all new to me. Any help would be greatly
> appreciated.
> Thanks Matt
>
> [image: Inline image 1]
>
>
>  ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Overriding stdlib http package

2015-01-14 Thread Brett Cannon
On Wed Jan 14 2015 at 4:08:52 PM Demian Brecht 
wrote:

> On 2015-01-14 12:25 PM, Guido van Rossum wrote:
> > I'm not sure how commit privileges would help you -- can't you just fork
> > the CPython (I'm sure there's already a Bitbucket mirror that you can
> fork
> > easily) and do your work there? Even with commit privileges you wouldn't
> be
> > committing partial work unreviewed.
>
> The friendly module fork allows for others to easily (or at least the
> intention is to do it easily) use the module with the new, backwards
> compatible features as a drop in replacement for the stdlib module.
>

But as Guido pointed out, we _like_ it being difficult to do because we
don't want this kind of substitution happening as code ends up depending on
bugs and quirks that you may fix.


> Giving others the ability to do this would lend itself to the adoption
> of the module and bug reports and such before upstream patches are
> produced.
>
> That said, the main downside to the friendly fork is the patch
> submission process: After changes have been merged to the fork, there's
> bound to be churn during the upstream patch submission, which would
> likely lead to something that looks like:
>
> > Implement feature/bug fix [1]
> > Commit changes to httlib3
> > Generate patch for CPython
> > Import patch to local CPython
> > Run unit tests [1]
> > Generate hg patch (patchA) for submission to bug tracker
> > Upload patchA
> > patchA is reviewed
> > Implement review changes and generate patchB [1]
> > Upload patchB
> > [...wait for merge...]
> > Merge delta of patchB and patchA to httplib3
> > Test/upload new PyPI package
>
> I see commit privileges helping in two ways:
>
> 1. I've experienced lag on a few occasions between review and merge. I'm
> assuming that this is largely due to a lack of dotted line maintainer of
> the http package (although I believe that the general consensus is that
> Senthil is the de facto maintainer of the package). Commit privileges
> would help in getting the patches merged once reviews are complete.
>
> 2. It would help my own workflow. While feature development can be done
> in httplib3, I do also tend to swap between issues in the bug tracker
> and large feature work. Because I have two lines of work (CPython/bug
> tracker and Github), I run into issues around where these changes should
> be made: Should the bug fixes live in CPython/bug tracker or should I
> fix the issue in httplib3 and go through the submission workflow above?
> Either way, I'm signing myself up for a good deal of headache managing
> the httplib3 work, especially when development work across feature
> branches is dependent on patches submitted to CPython.
>
>
> I definitely don't mind the extra work if there are no other options,
> but my end goal is to be a maintainer of the http package and core
> developer, not to maintain a third party fork.
>

How many other modules are dependent on the http module in the stdlib that
are going to be affected by your changes? One option is you fork http
**and** any modules in the stdlib that depend on it. You don't
really have to change the other modules beyond their import statements for
http -- you can even do `import http3 as http` or something to
minimize the changes -- but you at least don't have to monkeypatch
sys.modules for others to gain from your http changes. Plus as you patch
stuff in http you may find you have/want to patch other dependent modules
as well and so you will have already done that.
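
Concretely, the one-line change in each dependent module would look
something like this (just a sketch; `http3` is a hypothetical name for the
forked package, not an existing project):

    # A dependent module that currently does:
    #     from http.client import HTTPConnection
    # would instead do:
    from http3.client import HTTPConnection  # hypothetical fork of http

    conn = HTTPConnection('example.org')      # rest of the code is unchanged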
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Overriding stdlib http package

2015-01-15 Thread Brett Cannon
On Wed Jan 14 2015 at 4:58:20 PM Demian Brecht 
wrote:

> On 2015-01-14 1:19 PM, Brett Cannon wrote:
> > But as Guido pointed out, we _like_ it being difficult to do because we
> > don't want this kind of substitution happening as code ends up depending
> on
> > bugs and quirks that you may fix.
>
> I can understand the reasoning.
>
> > How many other modules are dependent on the http module in the stdlib
> that
> > are going to be affected by your changes? One option is you fork http
> > **and** and modules in the stdlib that are dependent on it. You don't
> > really have to change the other modules beyond their import statement of
> > using http -- you can even do `import http3 as http` or something to
> > minimize the changes -- but you at least don't have to monkeypatch
> > sys.modules for others to gain from your http changes. Plus as you patch
> > stuff in http you may find you have/want to patch other dependent modules
> > as well and so you will have already done that.
>
> It looks like there are 5 other modules dependent on the http package.
> If I understand what you're proposing, it pretty much defeats the
> purpose of what I'm trying to accomplish with a standalone httplib3
> package.
>
> That said, considering the points that you and Guido have both made, I
> think that the best course of action is to either just fork CPython as a
> whole or to continue with httplib3 but abandon overriding sys.modules,
> develop features detached from the stdlib and worry about fixing
> dependencies when integrating changes upstream.
>

If I were you I would fork and then send bugfixes upstream to us
while you develop API additions independently. That way, if your fork gains
traction you can come to us and say "my fork has a stable API, has existed
for (at least) a year, and the community seems to have rallied behind it",
at which point we can look at drawing it in. And if you fix enough bugs we
might make you a maintainer anyway while you work out the API design with the
community outside of the stdlib.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How do I ensure that my code is being executed?

2015-01-20 Thread Brett Cannon
This is a mailing list to discuss the development *of* Python, not its *use*.
You should be able to get help from python-list or #python on IRC.

On Tue Jan 20 2015 at 9:44:48 AM Neil Girdhar  wrote:

> I get error:
>
> TypeError: init_builtin() takes exactly 1 argument (0 given)
>
> The only source file that can generate that error
> is Modules/_ctypes/_ctypes.c, but when I make changes to that file such as:
>
> PyErr_Format(PyExc_TypeError,
>  "call takes exactly %d arguments XYZABC (%zd given)",
>  inargs_index, actual_args);
>
> I do not see any difference after make clean and a full rebuild.  How is
> this possible?  I need to debug the arguments passed.
>
> Best,
>
> Neil
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How do I ensure that my code is being executed?

2015-01-20 Thread Brett Cannon
On Tue Jan 20 2015 at 9:53:52 AM Benjamin Peterson 
wrote:

>
>
> On Tue, Jan 20, 2015, at 09:51, Brett Cannon wrote:
> > This is a mailing to discuss the development *of* Python, not its *use*.
> > You should be able to get help from python-list or #python on IRC.
>
> To be fair, he's asking to debug his patch in
> https://bugs.python.org/issue2292


Ah, sorry about that. The issue wasn't referenced in the email.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Undefined dlopen When Building Module On Android

2015-01-22 Thread Brett Cannon
A mobile SIG is being formed, but it doesn't have a mailing list yet, else
that would be a good place to ask this question.

On Wed Jan 21 2015 at 5:54:39 PM Guido van Rossum  wrote:

> Maybe try a list focused on Android development? Few people in the Python
> core development community have any Android experience. But the issues and
> context you offer seem to be related to the Android world, not to Python.
> (dlopen is used by a lot of systems, not just Python.)
>
> On Wed, Jan 21, 2015 at 2:43 PM, Cyd Haselton  wrote:
>
>> On Mon, Jan 19, 2015 at 5:23 PM, Cyd Haselton 
>> wrote:
>> > On Mon, Jan 19, 2015 at 8:51 AM, Cyd Haselton 
>> wrote:
>> >> Hello,
>> >> I'm struggling with a build issue on Android; I've posted to the
>> >> general python list with no result, so I'm re-posting here in hopes
>> >> that someone can help.  If this is the wrong place feel free to let me
>> >> know.
>> >>
>> >> I'm attempting to build Python 2.7.8 on my Android device; I'm using
>> >> an environment that simulates a Linux filesystem within the Android
>> >> terminal using a port of fakechroot.  Within that environment I've
>> >> ported and/or bootstrapped a number of Linux utilities (curl, git,
>> >> openssl, gcc)
>> >>
>> >> I run ./configure, then make, and the executable and library are
>> >> built.  The problem occurs when build_ext is run; the newly built
>> >> python executable builds, then links _struct, and immediately
>> >> afterwards I get an 'undefined reference to dlopen' error.
>> >>
>> >> If I run ./python setup.py --verbose -library-dirs /path/to/lib
>> >> --libraries='c dl m' -f, the 'undefined reference to dlopen' error is
>> >> thrown again.
>> >>
>> >> If I run ./python setup.py --verbose -library-dirs /path/to/lib
>> >> --libraries='-lc -ldl -lm' -f the build continues past _struct...even
>> >> though ld throws the expected 'unable to find -l-lc' and other errors.
>> >>
>> >> Let me know if you need me to provide additional information.  Any
>> >> help would be greatly appreciated.
>> >>
>> >> Cyd
>> >
>> >
>> > Additionally I took a strace of the error producing command. The
>> > following is (hopefully) a relevant portion minus the various 'no such
>> > file' lines before the correct lib is found (which it always is)
>> >
>> > 16513
>> open("/data/data/jackpal.androidterm/kbox2/bld/python/Python-2.7.8/Lib/distutils/unixccompiler.py",
>> > O_RDONLY|O_LARGEFILE) = 3   16513
>> >
>> open("/data/data/jackpal.androidterm/kbox2/bld/python/Python-2.7.8/Lib/distutils/unixccompiler.pyc",
>> > O_RDONLY|O_LARGEFILE) = 4   16513 vfork()
>> >  = 16525
>> > 16513 wait4(16525,  
>> > 16525 open("/data/data/jackpal.androidterm/kbox2/bin/sh",
>> > O_RDONLY|O_LARGEFILE) = 3
>> > 16525 execve("/data/data/jackpal.androidterm/kbox2/bin/sh", ["sh",
>> > "-c", "gcc --sysroot=/usr/gcc-4.9.2/sysroot -print-multiarch >
>> > build/temp.linux-armv7l-2.7/multiarch 2> /dev/null"], [/* 58 vars */])
>> > = 0
>> >
>> > *snip call to libc intercepted by libfakechroot*
>> >
>> > 16525 open("/system/lib/libc.so", O_RDONLY|O_LARGEFILE|0x8) = 3
>> > 16525 open("/system/lib/libm.so", O_RDONLY|O_LARGEFILE|0x8) = 3
>> > 16525 open("/dev/__properties__",
>> > O_RDONLY|O_LARGEFILE|O_NOFOLLOW|0x8) = 3
>> > 16525 open("build/temp.linux-armv7l-2.7/multiarch",
>> > O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
>> > 16525 open("/data/data/jackpal.androidterm/kbox2/dev/null",
>> > O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
>> > 16525 fork()= 16526
>> > 16525 wait4(-1,  
>> > 16526 open("/acct/uid/10186/tasks", O_RDWR|O_CREAT|O_LARGEFILE, 0666)
>> > = -1 EACCES (Permission denied)
>> >
>> > See attached for remainder
>>
>> Should I be posting this issue elsewhere?
>> Is more information needed?
>>
> ___
>> Python-Dev mailing list
>> Python-Dev@python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>>
> Unsubscribe:
>> https://mail.python.org/mailman/options/python-dev/guido%40python.org
>>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
>  ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Can Python Be Built Without Distutils

2015-01-24 Thread Brett Cannon
On Fri Jan 23 2015 at 5:45:28 PM Gregory P. Smith  wrote:

> On Fri Jan 23 2015 at 11:20:02 AM M.-A. Lemburg  wrote:
>
>> On 23.01.2015 19:48, Matthias Klose wrote:
>> > On 01/23/2015 06:30 PM, Cyd Haselton wrote:
>> >> Related to my earlier question regarding building Python on Android
>> >> and an undefined reference to dlopen error...I have the following
>> >> question:  Is it possible to build and install Python without having
>> >> to build and install...or use...distutils?
>> >>
>> >> Some background:
>> >> I can build the python interpreter on my device, and I can build a
>> >> bunch of modules.  The problem appears when make reaches the part
>> >> where setup.py is used to build and import modules...specifically when
>> >> setup.py attempts to import distutils.core.
>> >
>> > you can do this using Setup.local. This works for me building additional
>> > extensions as builtins.  It might require some tweaking to build
>> everything.
>>
>> You may want to have a look at the Setup files we're using
>> in eGenix PyRun, which uses them to force static builds of the
>> various built-in extensions.
>>
>> Look for these files:
>>
>> PyRun/Runtime/Setup.PyRun-2.7
>> PyRun/Runtime/Setup.PyRun-3.4
>>
>> in the source archives:
>>
>> http://www.egenix.com/products/python/PyRun/
>>
>> > Otoh, I would like to get rid off the setup.py altogether (/me ducks
>> ...).
>>
>> Why ? It's great for finding stuff on your system and configuring
>> everything without user intervention (well, most of the time :-)).
>>
>
> Because our setup.py is a nightmare of arbitrary code run in a linear
> fashion with ad-hoc checks for things that are unlike how any other project
> on the planet determines what is available on your system.  It may have
> seemed "great" when it was created in 2001.  It really shows its age now.
>
> It defeats build parallelism and dependency declaration.
> It also prevents cross compilation.
>
> Building an interpreter with a limited standard library on your build host
> so that you can run said interpreter to have it drive the remainder of your
> build is way more self-hosting than we rightfully need to be for CPython.
>

So are you suggesting adding the build rules to configure and the Makefile
-- and the Windows build file -- in order to drop setup.py?
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] also

2015-01-28 Thread Brett Cannon
You have the wrong mailing list for this sort of request. This list is
about the development *of* Python, not *with* it. And since Python the
language is not in the business of providing libraries for such specific
needs, this kind of request isn't appropriate here. You can try asking
somewhere like python-list to see if a library already exists for your
needs, though.

On Wed Jan 28 2015 at 10:13:03 AM Alan Armour  wrote:

> if you can do this
>
> a chemical physics and element physics like everything from melting points
> to how much heat you need to add two chemicals together
>
> and physics like aerodynamics, space dynamics, and hydrodynamics etcetera
> for propellers and motors and stuff.
>
> just having this in a main language seems to make a shit ton of sense.
>
>
> Just like all the physics you can think of from electrical equipment to
> building microchips to oscillators and resistors and stuff like that.
>
> thanks
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Workflow PEP proposals are now closed

2015-02-02 Thread Brett Cannon
The PEPs under consideration are PEPs 474
<https://www.python.org/dev/peps/pep-0474/> and 462
<https://www.python.org/dev/peps/pep-0462/> from Nick Coghlan to use
Kallithea and do self-hosting, and PEP 481
<https://www.python.org/dev/peps/pep-0481/> from Donald Stufft that
proposes using GitHub.

At this point I expect final PEPs by PyCon US so I can try and make a
decision by May 1. Longer still is to hopefully have whatever solution we
choose in place right after Python 3.5 is released.

And just a reminder to people, the lofty goal is to improve the overall
workflow for CPython itself such that our patch submission queue can
actually be cleared regularly. This not only benefits core devs by letting
us be more effective, but also contributors by making sure their hard work
gets addressed quickly and thus doesn't languish on the issue tracker for
very long.

If we can't find a solution for fixing our CPython workflow I will then be
willing to entertain these PEPs narrowing their scopes and only focus on
ancillary repos like the devguide, etc. where the workflows are simple.

I know the absolute worst case is nothing changes, but honestly I think the
worst case is Nick's work gets us off of Rietveld, the ancillary repos move
to GitHub, and we make the GitHub and Bitbucket mirrors of CPython official
ones for people to work from (bonus points if we get the issue tracker to
have push button patch pulling from GitHub; Bitbucket is already covered
thanks to our remote hg repo support). IOW I see nothing but a win for
contributors and core devs as well as everyone proposing solutions which is
a nice place to start from. =)
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Workflow improvement PEPs 474 & 462 updated

2015-02-02 Thread Brett Cannon
On Mon Feb 02 2015 at 9:52:29 AM Pierre-Yves David <
pierre-yves.da...@ens-lyon.org> wrote:

>
>
> On 02/02/2015 01:11 AM, Nick Coghlan wrote:
> >
> > On 2 Feb 2015 04:56, "francis"  > > wrote:
> [SNIP]
> >  > PS: Should this be forwarded to python-workflow or is that other list
> to
> >  > be considered obsolete?
> >
> > I withdrew from participating in that list when an individual banned
> > from the core mailing lists and the issue tracker for persistently
> > failing to respect other participants in those communities chose to
> > participate in it despite an explicit request from me that he refrain
> > from doing so (after wasting years trying to coach him into more
> > productive modes of engagement, I now just cut my losses and flat out
> > refuse to work in any environment where he has a significant presence).
> >
> > Since our moderation practices don't currently include a way to request
> > transferring bans between lists, and I'm reluctant to push for that to
> > change when I have such a clear personal stake in the outcome (it reads
> > like a personal vendetta against the individual concerned), that's the
> > way things are likely to stay unless/until he also gets himself banned
> > from the core workflow list.
>
> Without emitting any judgment on your decision, I'm deeply sad that this
> list have been "abandoned".
>

Others do participate there, so the list is not dead or abandoned; it is
simply that Nick is not participating on that list.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Workflow PEP proposals are now closed

2015-02-02 Thread Brett Cannon
On Mon Feb 02 2015 at 10:00:30 AM Donald Stufft  wrote:

>
> On Feb 2, 2015, at 9:35 AM, Brett Cannon  wrote:
>
> The PEPs under consideration are PEPs 474
> <https://www.python.org/dev/peps/pep-0474/> and 462
> <https://www.python.org/dev/peps/pep-0462/> from Nick Coghlan to use
> Kallithea and do self-hosting, and PEP 481
> <https://www.python.org/dev/peps/pep-0481/> from Donald Stufft that
> proposes using GitHub.
>
> At this point I expect final PEPs by PyCon US so I can try and make a
> decision by May 1. Longer still is to hopefully have whatever solution we
> choose in place right after Python 3.5 is released.
>
> And just a reminder to people, the lofty goal is to improve the overall
> workflow for CPython itself such that our patch submission queue can
> actually be cleared regularly. This not only benefits core devs by letting
> us be more effective, but also contributors by making sure their hard work
> gets addressed quickly and thus doesn't languish on the issue tracker for
> very long.
>
> If we can't find a solution for fixing our CPython workflow I will then be
> willing to entertain these PEPs narrowing their scopes and only focus on
> ancillary repos like the devguide, etc. where the workflows are simple.
>
> I know the absolute worst case is nothing changes, but honestly I think
> the worst case is Nick's work gets us off of Rietveld, the ancillary repos
> move to GitHub, and we make the GitHub and Bitbucket mirrors of CPython
> official ones for people to work from (bonus points if we get the issue
> tracker to have push button patch pulling from GitHub; Bitbucket is already
> covered thanks to our remote hg repo support). IOW I see nothing but a win
> for contributors and core devs as well as everyone proposing solutions
> which is a nice place to start from. =)
>
>
>
> Is there going to be discussion between the two approaches or should the
> PEPs themselves address each other?
>

Since PEPs are meant to act as a record of what was discussed on a topic,
it probably wouldn't hurt to incorporate why your approach is better
than the other one in the PEP itself. We can obviously talk openly here
when you feel the PEP is ready for it.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Workflow PEP proposals are now closed

2015-02-02 Thread Brett Cannon
On Mon Feb 02 2015 at 2:40:21 PM Antoine Pitrou  wrote:

>
> Hi,
>
> What does "closed" mean in this context?
>

No new PEPs on this topic will be taken under consideration, so submissions
are now "closed" to new participants.

-Brett


>
> Regards
>
> Antoine.
>
>
>
> On Mon, 02 Feb 2015 14:35:47 +
> Brett Cannon  wrote:
> > The PEPs under consideration are PEPs 474
> > <https://www.python.org/dev/peps/pep-0474/> and 462
> > <https://www.python.org/dev/peps/pep-0462/> from Nick Coghlan to use
> > Kallithea and do self-hosting, and PEP 481
> > <https://www.python.org/dev/peps/pep-0481/> from Donald Stufft that
> > proposes using GitHub.
> >
> > At this point I expect final PEPs by PyCon US so I can try and make a
> > decision by May 1. Longer still is to hopefully have whatever solution we
> > choose in place right after Python 3.5 is released.
> >
> > And just a reminder to people, the lofty goal is to improve the overall
> > workflow for CPython itself such that our patch submission queue can
> > actually be cleared regularly. This not only benefits core devs by
> letting
> > us be more effective, but also contributors by making sure their hard
> work
> > gets addressed quickly and thus doesn't languish on the issue tracker for
> > very long.
> >
> > If we can't find a solution for fixing our CPython workflow I will then
> be
> > willing to entertain these PEPs narrowing their scopes and only focus on
> > ancillary repos like the devguide, etc. where the workflows are simple.
> >
> > I know the absolute worst case is nothing changes, but honestly I think
> the
> > worst case is Nick's work gets us off of Rietveld, the ancillary repos
> move
> > to GitHub, and we make the GitHub and Bitbucket mirrors of CPython
> official
> > ones for people to work from (bonus points if we get the issue tracker to
> > have push button patch pulling from GitHub; Bitbucket is already covered
> > thanks to our remote hg repo support). IOW I see nothing but a win for
> > contributors and core devs as well as everyone proposing solutions which
> is
> > a nice place to start from. =)
> >
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Problem running ./python -m test -v test_whatever

2015-02-11 Thread Brett Cannon
You might want to try asking on python-l...@python.org to get a wider
audience as you might find a fellow AIX user there who can help you out.

On Wed Feb 11 2015 at 12:29:56 AM Dwight  wrote:

>  Hi,
> I am primarily a user; but since I can not get a newer version
> of firefox for my system I have begun the very long process of
> trying to build a newer version of firefox for my system.
> I am using an IBM pSeries system running AIX 7.1.
> I am using gcc and IBM ld.
> All the modules I have built are being installed in a directory
> called /opt/alinux.  A lot of linux routines are stored in a directory
> called /opt/freeware and of course IBM has some version of their
> own which are installed in /usr...  Currently there is only one thing
> installed in /usr/local and that is clamscan.
> I have built and installed the tcl.8.6.3, tkl.8.6.3 and python 2.7.9
> into
> /opt/aluinux.
> I am now trying to build and install python 3.4.2.  So far I
> have found a way to compile python successfully.  There are
> only three features missing (_sqlite3, ossaudiodev and spwd).
> The configure command I ran was:
> ./configure --prefix=/opt/alinux --exec-prefix=/opt/alinux
> --enable-shared  --with-system-ffi --enable-ipv6 \
> --with-tcltk-includes='-I/opt/alinux/include'
> --with-tcltk-libs='-L/opt/alinux/lib' | tee MYconfig.log
>
> After running gmake test I found:
> Ran 509 tests in 47.407s
> FAILED (errors=2, skipped=8)
> Ran 49 tests in 0.065s
> FAILED (failures=2, skipped=1)
> Ran 34 tests in 0.320s
> FAILED (errors=2, skipped=6)
> Ran 80 tests in 1.040s
> FAILED (errors=2, skipped=20)
> Ran 10 tests in 0.366s
> FAILED (failures=1, skipped=2)
> Ran 506 tests in 28.860s
> FAILED (failures=6, errors=5, skipped=83)
> Ran 97 tests in 21.921s
> FAILED (failures=9, skipped=3)
> I then tried to run  ./python -m test -v test_whatever
> and got the following error:
> $ pwd
> /home/surfer/DownLoadLFNs/HTML/NEWS/BuildFirefox/Python-3.4.2
> $ ls -la lib*
> -rw-r--r--1 surfer   Internet   19562608 Feb 10 20:02 libpython3.4m.a
> -rwxr-xr-x1 surfer   Internet   13331408 Feb 10 20:02 libpython3.4m.so
> $ ./python -m test -v test_ssl
> exec(): 0509-036 Cannot load program ./python because of the following
> errors:
> 0509-150   Dependent module libpython3.4m.so could not be loaded.
> 0509-022 Cannot load module libpython3.4m.so.
> 0509-026 System error: A file or directory in the path name does
> not exist.
> I would really appreciate some help in determining what I am doing
> wrong.
> As I said in the beginning I am primarily a user and not a developer.
> I can solve
> some fairly simple problems; but that about it.
> I am guessing that there is some kind of linking problem; but I do not
> know
> how to solve this problem.
> I tried this:
> $
> LDFLAGS=-L/home/surfer/DownLoadLFNs/HTML/NEWS/BuildFirefox/Python-3.4.2
>
> $ export LDFLAGS
> and got the same results.
>
>
>  ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] TypeError messages

2015-02-20 Thread Brett Cannon
On Thu Feb 19 2015 at 5:52:07 PM Serhiy Storchaka 
wrote:

> Different patterns for TypeError messages are used in the stdlib:
>
>  expected X, Y found
>  expected X, found Y
>  expected X, but Y found
>  expected X instance, Y found
>  X expected, not Y
>  expect X, not Y
>  need X, Y found
>  X is required, not Y
>  Z must be X, not Y
>  Z should be X, not Y
>
> and more.
>
> What the pattern is most preferable?
>

My preference is for "expected X, but found Y".


>
> Some messages use the article before X or Y. Should the article be used
> or omitted?
>
> Some messages (only in C) truncate actual type name (%.50s, %.80s,
> %.200s, %.500s). Should type name be truncated at all and for how limit?
>

I assume this is over some worry of string size blowing out memory
allocation or something? If someone can show that's an actual worry then
fine, otherwise I say don't truncate.


> Type names newer truncated in TypeError messages raised in Python code.
>
> Some messages enclose actual type name with single quotes ('%s',
> '%.200s'). Should type name be quoted? It is uncommon if type name
> contains spaces.
>

 I agree that type names don't need to be quoted.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] TypeError messages

2015-02-20 Thread Brett Cannon
On Fri Feb 20 2015 at 10:27:35 AM Stefan Richthofer <
stefan.richtho...@gmx.de> wrote:

> Honestly, the right solution would be to have a function or macro that
> generates the TypeError messages
> from X, Y, Z arguments. (Until now I actually believed this would be
> already the case)
> - type errors would be of uniform style
> - if for some reason the format should be changed, this can be done in one
> central place
> - if someone would want to parse the error message this would be feasible
> I suppose it should be straightforward to find error message creations in
> the source by searching for "TypeError" or something.
> Maybe this kind of cleanup could be done along with the implementation of
> PEP 484?
>

Actually PEP 473 covers standardizing error messages by introducing
keyword-only arguments, which would lead to a standardized message being
generated. On the C side, a function can be provided to make it easy
to get the same result as constructing the exception with the keyword-only
arguments.
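
Just to illustrate the "one central place" idea (this is only a sketch, not
the API proposed in PEP 473; the helper name `expected` is made up):

    def expected(expected_type, got, name=None):
        """Build a TypeError with a uniform message for every call site."""
        prefix = '{} must be '.format(name) if name else 'expected '
        return TypeError('{}{}, got {}'.format(
            prefix, expected_type.__name__, type(got).__name__))

    raise expected(str, 42, name='path')  # TypeError: path must be str, got int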

-Brett


>
> -Stefan
>
> *Sent:* Friday, February 20, 2015 at 3:05 PM
> *From:* "Brett Cannon" 
> *To:* "Serhiy Storchaka" , python-dev@python.org
> *Subject:* Re: [Python-Dev] TypeError messages
>
> On Thu Feb 19 2015 at 5:52:07 PM Serhiy Storchaka 
> wrote:
>>
>> Different patterns for TypeError messages are used in the stdlib:
>>
>>  expected X, Y found
>>  expected X, found Y
>>  expected X, but Y found
>>  expected X instance, Y found
>>  X expected, not Y
>>  expect X, not Y
>>  need X, Y found
>>  X is required, not Y
>>  Z must be X, not Y
>>  Z should be X, not Y
>>
>> and more.
>>
>> What the pattern is most preferable?
>
>
> My preference is for "expected X, but found Y".
>
>
>>
>> Some messages use the article before X or Y. Should the article be used
>> or omitted?
>>
>> Some messages (only in C) truncate actual type name (%.50s, %.80s,
>> %.200s, %.500s). Should type name be truncated at all and for how limit?
>
>
> I assume this is over some worry of string size blowing out memory
> allocation or something? If someone can show that's an actual worry then
> fine, otherwise I say don't truncate.
>
>
>> Type names newer truncated in TypeError messages raised in Python code.
>>
>> Some messages enclose actual type name with single quotes ('%s',
>> '%.200s'). Should type name be quoted? It is uncommon if type name
>> contains spaces.
>
>
>  I agree that type names don't need to be quoted.
>  ___ Python-Dev mailing list
> Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/stefan.richthofer%40gmx.de
>   ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] str(IntEnum)

2015-02-20 Thread Brett Cannon
On Fri Feb 20 2015 at 11:39:11 AM Demian Brecht 
wrote:

> While working on a bug in the issue tracker, I came across something that
> I thought was a little odd around the behaviour of IntEnum. Should the
> behaviour of an instance of an IntEnum not be symmetric to an int where
> possible? For example:
>
> >>> class MyEnum(IntEnum):
> ... FOO = 1
> ...
> >>> MyEnum.FOO == 1
> True
> >>> MyEnum.FOO * 3 == 3
> True
> >>> str(MyEnum.FOO) == str(1)
> False
>
> In my mind, the string representation here should be “1” and not the
> label. Was this simply an oversight of the specialized IntEnum
> implementation, or was there a concrete reason for this that I’m not seeing?
>

Concrete reason. The string is 'MyEnum.FOO', which is much more readable and
makes it obvious where the value came from. The fact that it can be treated
as an int is for the same reason True and False are subclasses of int: it made
practical sense for compatibility with what they typically replaced, but
where it made more sense to diverge and introduce new behaviour, we did
so.
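
A quick illustration, continuing Demian's `MyEnum` example from above
(behaviour as of Python 3.4):

    >>> str(MyEnum.FOO)        # readable and says which member it is
    'MyEnum.FOO'
    >>> int(MyEnum.FOO)        # the underlying int value is still there
    1
    >>> str(int(MyEnum.FOO))   # be explicit if you want the numeric string
    '1'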
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Update to PEP 11 to clarify garnering platform support

2015-02-20 Thread Brett Cannon
I just realized I actually never committed this change. Assuming no new
objections I'll commit this in the near future (promise this time =).

On Fri May 16 2014 at 1:51:00 PM Brett Cannon  wrote:

> Here is some proposed wording. Since it is more of a clarification of what
> it takes to garner support -- which is just a new section -- rather than a
> complete rewrite I'm including just the diff to make it easier to read the
> changes.
>
>
> *diff -r 49d18bb47ebc pep-0011.txt*
>
> *--- a/pep-0011.txt Wed May 14 11:18:22 2014 -0400*
>
> *+++ b/pep-0011.txt Fri May 16 13:48:30 2014 -0400*
>
> @@ -2,22 +2,21 @@
>
>  Title: Removing support for little used platforms
>
>  Version: $Revision$
>
>  Last-Modified: $Date$
>
> -Author: mar...@v.loewis.de (Martin von Löwis)
>
> +Author: Martin von Löwis ,
>
> +Brett Cannon 
>
>  Status: Active
>
>  Type: Process
>
>  Content-Type: text/x-rst
>
>  Created: 07-Jul-2002
>
>  Post-History: 18-Aug-2007
>
> +  16-May-2014
>
>
>
>
>
>  Abstract
>
>  
>
>
>
> -This PEP documents operating systems (platforms) which are not
>
> -supported in Python anymore.  For some of these systems,
>
> -supporting code might be still part of Python, but will be removed
>
> -in a future release - unless somebody steps forward as a volunteer
>
> -to maintain this code.
>
> +This PEP documents how an operating system (platform) garners
>
> +support in Python as well as documenting past support.
>
>
>
>
>
>  Rationale
>
> @@ -37,16 +36,53 @@
>
>  change to the Python source code will work on all supported
>
>  platforms.
>
>
>
> -To reduce this risk, this PEP proposes a procedure to remove code
>
> -for platforms with no Python users.
>
> +To reduce this risk, this PEP specifies what is required for a
>
> +platform to be considered supported by Python as well as providing a
>
> +procedure to remove code for platforms with little or no Python
>
> +users.
>
>
>
> +Supporting platforms
>
> +
>
> +
>
> +Gaining official platform support requires two things. First, a core
>
> +developer needs to volunteer to maintain platform-specific code. This
>
> +core developer can either already be a member of the Python
>
> +development team or be given contributor rights on the basis of
>
> +maintaining platform support (it is at the discretion of the Python
>
> +development team to decide if a person is ready to have such rights
>
> +even if it is just for supporting a specific platform).
>
> +
>
> +Second, a stable buildbot must be provided [2]_. This guarantees that
>
> +platform support will not be accidentally broken by a Python core
>
> +developer who does not have personal access to the platform. For a
>
> +buildbot to be considered stable it requires that the machine be
>
> +reliably up and functioning (but it is up to the Python core
>
> +developers to decide whether to promote a buildbot to being
>
> +considered stable).
>
> +
>
> +This policy does not disqualify supporting other platforms
>
> +indirectly. Patches which are not platform-specific but still done to
>
> +add platform support will be considered for inclusion. For example,
>
> +if platform-independent changes were necessary in the configure
>
> +script which was motivated to support a specific platform that would
>
> +be accepted. Patches which add platform-specific code such as the
>
> +name of a specific platform to the configure script will generally
>
> +not be accepted without the platform having official support.
>
> +
>
> +CPU architecture and compiler support are viewed in a similar manner
>
> +as platforms. For example, to consider the ARM architecture supported
>
> +a buildbot running on ARM would be required along with support from
>
> +the Python development team. In general it is not required to have
>
> +a CPU architecture run under every possible platform in order to be
>
> +considered supported.
>
>
>
>  Unsupporting platforms
>
>  --
>
>
>
> -If a certain platform that currently has special code in it is
>
> -deemed to be without Python users, a note must be posted in this
>
> -PEP that this platform is no longer actively supported.  This
>
> +If a certain platform that currently has special code in Python is
>
> +deemed to be without Python users or lacks proper support from the
>
> +Python development team and/or a buildbot, a note must be posted in
>
> +this PEP that this platform is no longer actively supported. 

Re: [Python-Dev] TypeError messages

2015-02-21 Thread Brett Cannon
On Sat Feb 21 2015 at 12:15:25 PM Antoine Pitrou 
wrote:

> On Fri, 20 Feb 2015 14:05:11 +
> Brett Cannon  wrote:
> > On Thu Feb 19 2015 at 5:52:07 PM Serhiy Storchaka 
> > wrote:
> >
> > > Different patterns for TypeError messages are used in the stdlib:
> > >
> > >  expected X, Y found
> > >  expected X, found Y
> > >  expected X, but Y found
> > >  expected X instance, Y found
> > >  X expected, not Y
> > >  expect X, not Y
> > >  need X, Y found
> > >  X is required, not Y
> > >  Z must be X, not Y
> > >  Z should be X, not Y
> > >
> > > and more.
> > >
> > > What the pattern is most preferable?
> > >
> >
> > My preference is for "expected X, but found Y".
>
> If we are busy nitpicking, why are we saying "found Y"? Nothing was
> *found* by the callee, it just *got* an argument.
>
> So it should be "expected X, but got Y".
>
> Personally, I think the "but" is superfluous: the contradiction is
> already implied, so "expected X, got Y" is terser and conveys the
> meaning just as well.
>

I'm also fine with the terser version.

-Brett
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Generate all Argument Clinic code into separate files

2015-02-21 Thread Brett Cannon
On Sat Feb 21 2015 at 1:50:32 PM Serhiy Storchaka 
wrote:

> Currently for some files converted to use Argument Clinic the generated
> code is written into a separate file, for other files it is inlined.
>
> I'm going to make a number of small enhancements to Argument Clinic to
> generate faster code for parsing arguments (see for example issue23492).
> Every such step will produce large diffs for
> generated code and will create code churn if it is inlined and mixed up
> with handwritten code. It would be better if only the generated files
> changed. So I suggest moving all inlined generated code into a separate
> file. What do you think?
>

+1 to moving to a separate file for all .c files. Might be painful now but
the long-term benefits are worth it.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] cpython: Issue #23152: Implement _Py_fstat() to support files larger than 2 GB on

2015-02-21 Thread Brett Cannon
On Sat Feb 21 2015 at 4:23:16 PM Ben Hoyt  wrote:

> When merging some changes while working on scandir, I noticed a minor
> issue with this commit:
>
> https://hg.python.org/cpython/rev/4f6f4aa0d80f
>
> The definition of "struct win32_stat" has been moved to fileutils.h and
> renamed to "struct _Py_stat_struct", which is fine -- however, the old
> "struct win32_stat" definition is still present (but unused) in
> posixmodule.c.
>
> So I think the old "struct win32_stat { ... }" definition can simply be
> removed from posixmodule.c now.
>

I don't think win32_stat is part of the stable ABI so as long as everything
keeps working then I don't see why it needs to stick around.


>
> Also, unrelated to this commit, I notice the _Py_attribute_data_to_stat
> function (was attribute_data_to_stat) can't fail and always returns 0, and
> all callers ignore its return value anyway. Can it be changed to return
> void?
>

Don't see why not since it's a private API.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Emit SyntaxWarning on unrecognized backslash escapes?

2015-02-23 Thread Brett Cannon
On Mon Feb 23 2015 at 10:55:23 AM Chris Angelico  wrote:

> On Tue, Feb 24, 2015 at 2:44 AM, Guido van Rossum 
> wrote:
> > I think that's a bit too strong. This has been unquestionably valid,
> correct
> > Python -- it was an intentional feature from the start. It may not have
> > turned out great, but I think that before warning loudly about every
> > instance of this we should have a silent deprecation (which you can turn
> > into a visible warning with a command-line flag or a warnings filter).
> And
> > we should have agreement that we're eventually going to make it a syntax
> > error.
>
> Is it at all possible for this to be introduced in the 2.x line, or is
> the entire concept of a deprecation period one that has to start with
> a minor version?
>

Starts with a minor version.


>
> If it's never going to happen in 2.x, I'll raise this as yet another

> reason to get the course and all our students migrated to 3.x, but on
> the flip side, it means that we absolutely can't get the benefit until
> that jump is made.
>

Never going to happen in 2.x.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] easy_install ?

2015-02-24 Thread Brett Cannon
On Tue Feb 24 2015 at 10:54:14 AM Laura Creighton  wrote:

> Hello all,
> I wonder what the status of easy_install is.  I keep finding people
> who needed to install something 'path.py' is the latest, who needed to
> use pip, and couldn't get easy_install to work.  Should we tell people
> that easy_install is deprecated, or ask them  to file bugs when
> they could not get it to work, or ...
>

Tell people to use pip. Having ensurepip in Python 2.7 and 3.4 makes it as
official as anything will be as the recommended tool to install projects.
Otherwise, easy_install has nothing to do directly with python-dev, so I
don't think we can comment as a group on what people should do in terms
of bugs, etc.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Request for Pronouncement: PEP 441 - Improving Python ZIP Application Support

2015-02-24 Thread Brett Cannon
On Tue Feb 24 2015 at 3:21:30 PM Paul Moore  wrote:

> On 24 February 2015 at 18:58, Guido van Rossum  wrote:
> > Why no command-line equivalent for the other two methods?  I propose the
> > following interface: if there's only one positional argument, we're
> asking
> > to print its shebang line; if there are two and the input position is an
> > archive instead of a directory, we're copying.  (In the future people
> will
> > want an option to print more stuff, e.g. the main function or even a full
> > listing.)
>
> Thinking about this, there are 3 main uses:
>
> 1. Create an archive
> 2. Print the shebang
> 3. Change the shebang
>
> Of these, (1) is the crucial one.
>
> Basic usage should be
>
> python -m zipapp mydir [-o anothername.pyz] [-p interpreter] [-m
> entry:point]
>
> This zips up mydir to create an archive mydir.pyz. Options to change
> the target name, set a shebang line (side note: --python/-p or
> --interpreter/-i?) and set the entry point,
>
> I see this as pretty non-negotiable, this is the key use case that
> needs to be as simple as possible.
>
> To print the shebang, we could use
>
> python -m zipapp myapp.pyz --show
>
> This allows for future expansion by adding options, although most
> other things you might want to do (list the files, display
> __main__.py) can be done with a standard zip utility. I'm not keen on
> the option name --show, but I can't think of anything substantially
> better.
>
> To modify an archive could be done using
>
> python -m zipapp old.pyz new.pyz [-p interpreter]
>
> Default is to strip the shebang (no -p option). There's no option to
> omit the target and do an inplace update because I feel the default
> action (strip the shebang from the existing file with no backup) is
> too dangerous.
>
> To be explicit, "python -m zipapp app.pyz" will fail with a message
> "In-place editing of python zip applications is not supported".
>
> That seems to work.
>
> Open questions:
>
> 1. To create an archive, use -o target for an explicit target name, or
> just "target". The former is more conventional, the latter consistent
> with modification. Or we could make modification use a (mandatory) -o
> option.
>

EIBTI suggests requiring the -o. Pragmatism suggests just [in] [out] and
using context based on what kind of thing [in] points at, as well as whether
-p is specified and whether it has an argument, which is the most minimal UX
you can have. The question is whether you can screw up by specifying the
wrong thing somehow (you might have to require that [out] doesn't already
exist to make it work).
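
Purely as an illustration, here is a rough sketch of how that context-based
[in] [out] interface could be parsed -- the option behaviour and the dispatch
below are assumptions for the sake of the example, not the PEP's actual UI:

    import argparse, os

    _SHOW = object()  # sentinel: -p given with no value means "show the shebang"

    parser = argparse.ArgumentParser(prog='python -m zipapp')
    parser.add_argument('source', help='directory or existing .pyz archive')
    parser.add_argument('target', nargs='?', default=None,
                        help='output archive (must not already exist)')
    parser.add_argument('-p', '--python', nargs='?', default=None, const=_SHOW,
                        help='shebang interpreter; omit the value to print it')
    args = parser.parse_args()

    if os.path.isdir(args.source):
        print('would create', args.target or args.source + '.pyz')
    elif args.python is _SHOW and args.target is None:
        print('would print the shebang of', args.source)
    else:
        print('would copy', args.source, 'to', args.target,
              'rewriting or stripping the shebang')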


> 2. -p/--python or -i/--interpreter for the shebang setting option
>

Since you are going to be using `python -m zipapp`, -i/--interpreter is
less redundant-looking on the command line.


> 3. What to call the "show the shebang line" option


As suggested above, -p without an argument could do it; otherwise --show or
--info seems fine (I like --shebang, but that will probably be tough on
non-English speakers).
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Request for Pronouncement: PEP 441 - Improving Python ZIP Application Support

2015-02-25 Thread Brett Cannon
On Wed, Feb 25, 2015 at 2:33 PM Paul Moore  wrote:

> On 25 February 2015 at 17:06, Paul Moore  wrote:
> >> Is the difference between create and copy important?  e.g., is there
> >> anything wrong with
> >>
> >> create_archive(old_archive, output=new_archive) working as well as
> >> create_archive(directory, archive)?
> >
> > Probably not, now. The semantics have converged enough that this might
> > be reasonable. It's how the command line interface works, after all.
> > It would mean that the behaviour would be different depending on the
> > value of the source argument (supplying the main argument and omitting
> > the target are only valid for create), but again that's how the
> > command line works.
> >
> > I'll have a go at implementing this change this evening and see how it
> > plays out.
>
> That worked out pretty well, IMO. The resulting API is a lot cleaner
> (internally, there's not much change, I still have a copy_archive
> function but it's now private). I've included the resulting API
> documentation below. It looks pretty good to me.
>
> Does anyone have any further suggestions or comments, or does this
> look ready to go back to Guido for a second review?
>

+1 from me.

-Brett


>
> Paul
>
> Python API
> --
>
> The module defines two convenience functions:
>
>
> .. function:: create_archive(source, target=None, interpreter=None,
> main=None)
>
>Create an application archive from *source*.  The source can be any
>of the following:
>
>* The name of a directory, in which case a new application archive
>  will be created from the content of that directory.
>* The name of an existing application archive file, in which case the
>  file is copied to the target.  The file name should include the
>  ``.pyz`` extension, if required.
>* A file object open for reading in bytes mode.  The content of the
>  file should be an application archive, and the file object is
>  assumed to be positioned at the start of the archive.
>
>The *target* argument determines where the resulting archive will be
>written:
>
>* If it is the name of a file, the archive will be written to that
>  file.
>* If it is an open file object, the archive will be written to that
>  file object, which must be open for writing in bytes mode.
>* If the target is omitted (or None), the source must be a directory
>  and the target will be a file with the same name as the source, with
>  a ``.pyz`` extension added.
>
>The *interpreter* argument specifies the name of the Python
>interpreter with which the archive will be executed.  It is written as
>a "shebang" line at the start of the archive.  On POSIX, this will be
>interpreted by the OS, and on Windows it will be handled by the Python
>launcher.  Omitting the *interpreter* results in no shebang line being
>written.  If an interpreter is specified, and the target is a
>filename, the executable bit of the target file will be set.
>
>The *main* argument specifies the name of a callable which will be
>used as the main program for the archive.  It can only be specified if
>the source is a directory, and the source does not already contain a
>``__main__.py`` file.  The *main* argument should take the form
>"pkg.module:callable" and the archive will be run by importing
>"pkg.module" and executing the given callable with no arguments.  It
>is an error to omit *main* if the source is a directory and does not
>contain a ``__main__.py`` file, as otherwise the resulting archive
>would not be executable.
>
>If a file object is specified for *source* or *target*, it is the
>caller's responsibility to close it after calling create_archive.
>
>When copying an existing archive, file objects supplied only need
>``read`` and ``readline``, or ``write`` methods.  When creating an
>archive from a directory, if the target is a file object it will be
>passed to the ``zipfile.ZipFile`` class, and must supply the methods
>needed by that class.
>
> .. function:: get_interpreter(archive)
>
>Return the interpreter specified in the ``#!`` line at the start of the
>archive.  If there is no ``#!`` line, return :const:`None`.
>The *archive* argument can be a filename or a file-like object open
>for reading in bytes mode.  It is assumed to be at the start of the
> archive.
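
A minimal usage sketch of the API documented above (assuming the module
lands as ``zipapp`` and a hypothetical directory ``myapp/`` with no
``__main__.py`` whose top level contains a ``cli.py`` exposing a ``main()``
callable):

    import zipapp

    # Bundle the directory into an executable archive with a shebang line.
    zipapp.create_archive('myapp', target='myapp.pyz',
                          interpreter='/usr/bin/env python3',
                          main='cli:main')

    # Copy an existing archive; omitting *interpreter* strips the shebang.
    zipapp.create_archive('myapp.pyz', target='myapp-noshebang.pyz')

    print(zipapp.get_interpreter('myapp.pyz'))  # -> '/usr/bin/env python3'
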
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 448 review

2015-02-26 Thread Brett Cannon
On Thu, Feb 26, 2015 at 3:38 PM Ethan Furman  wrote:

> On 02/26/2015 12:19 PM, Guido van Rossum wrote:
>
> > As a follow-up, Joshua updated the PEP to remove *comprehensions, and it
> is now accepted.
>
> Congratulations Thomas, Joshua, and Neil!!
>

I'll add a "thanks" to everyone involved with the PEP since it was an
involved one implementation-wise and discussion-wise.

-Brett
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Update to PEP 11 to clarify garnering platform support

2015-02-27 Thread Brett Cannon
On Fri, Feb 20, 2015 at 1:47 PM Brett Cannon  wrote:

> I just realized I actually never committed this change. Assuming no new
> objections I'll commit this in the near future (promise this time =).
>

My proposed changes have now been committed. Thanks to everyone who
provided feedback!

This should hopefully make it much clearer what it takes to accept
platform-specific patches (i.e., a core dev willing to maintain the
compatibility and a stable buildbot for the platform).

For those trying to get Python working on Android, this will mean a
conversation will be necessary about how to get a buildbot or some form of
regular testing set up in order to accept Android-specific patches (along
with a core dev willing to keep an eye on the compatibility).

-Brett


>
>
> On Fri May 16 2014 at 1:51:00 PM Brett Cannon  wrote:
>
>> Here is some proposed wording. Since it is more of a clarification of
>> what it takes to garner support -- which is just a new section -- rather
>> than a complete rewrite I'm including just the diff to make it easier to
>> read the changes.
>>
>>
>> *diff -r 49d18bb47ebc pep-0011.txt*
>>
>> *--- a/pep-0011.txt Wed May 14 11:18:22 2014 -0400*
>>
>> *+++ b/pep-0011.txt Fri May 16 13:48:30 2014 -0400*
>>
>> @@ -2,22 +2,21 @@
>>
>>  Title: Removing support for little used platforms
>>
>>  Version: $Revision$
>>
>>  Last-Modified: $Date$
>>
>> -Author: mar...@v.loewis.de (Martin von Löwis)
>>
>> +Author: Martin von Löwis ,
>>
>> +Brett Cannon 
>>
>>  Status: Active
>>
>>  Type: Process
>>
>>  Content-Type: text/x-rst
>>
>>  Created: 07-Jul-2002
>>
>>  Post-History: 18-Aug-2007
>>
>> +  16-May-2014
>>
>>
>>
>>
>>
>>  Abstract
>>
>>  
>>
>>
>>
>> -This PEP documents operating systems (platforms) which are not
>>
>> -supported in Python anymore.  For some of these systems,
>>
>> -supporting code might be still part of Python, but will be removed
>>
>> -in a future release - unless somebody steps forward as a volunteer
>>
>> -to maintain this code.
>>
>> +This PEP documents how an operating system (platform) garners
>>
>> +support in Python as well as documenting past support.
>>
>>
>>
>>
>>
>>  Rationale
>>
>> @@ -37,16 +36,53 @@
>>
>>  change to the Python source code will work on all supported
>>
>>  platforms.
>>
>>
>>
>> -To reduce this risk, this PEP proposes a procedure to remove code
>>
>> -for platforms with no Python users.
>>
>> +To reduce this risk, this PEP specifies what is required for a
>>
>> +platform to be considered supported by Python as well as providing a
>>
>> +procedure to remove code for platforms with little or no Python
>>
>> +users.
>>
>>
>>
>> +Supporting platforms
>>
>> +
>>
>> +
>>
>> +Gaining official platform support requires two things. First, a core
>>
>> +developer needs to volunteer to maintain platform-specific code. This
>>
>> +core developer can either already be a member of the Python
>>
>> +development team or be given contributor rights on the basis of
>>
>> +maintaining platform support (it is at the discretion of the Python
>>
>> +development team to decide if a person is ready to have such rights
>>
>> +even if it is just for supporting a specific platform).
>>
>> +
>>
>> +Second, a stable buildbot must be provided [2]_. This guarantees that
>>
>> +platform support will not be accidentally broken by a Python core
>>
>> +developer who does not have personal access to the platform. For a
>>
>> +buildbot to be considered stable it requires that the machine be
>>
>> +reliably up and functioning (but it is up to the Python core
>>
>> +developers to decide whether to promote a buildbot to being
>>
>> +considered stable).
>>
>> +
>>
>> +This policy does not disqualify supporting other platforms
>>
>> +indirectly. Patches which are not platform-specific but still done to
>>
>> +add platform support will be considered for inclusion. For example,
>>
>> +if platform-independent changes were necessary in the configure
>>
>> +script which was motivated to support a specific platform that would
>>
>> +be accepted. Patches which add platform-specific code such a

Re: [Python-Dev] PEP 485 review (isclose())

2015-03-04 Thread Brett Cannon
On Wed, Mar 4, 2015 at 3:14 PM Chris Barker  wrote:

> On Tue, Mar 3, 2015 at 8:43 AM, Ethan Furman  wrote:
>
>> On 03/03/2015 01:17 AM, Victor Stinner wrote:
>>
>
>
>> > Maybe it's time to rename the math module to _math and create a
>> > math.py module, like _decimal/decimal? math.py should end with "from
>> > _math import *".
>>
>> +1
>>
>
> What do folks think? If we're going to do this, I'll write isclose() in
> python. And I could do the work to split it out, too, I suppose.
>

My vote -- as always -- is to do it in Python. If someone is sufficiently
motivated to re-implement in C then that's great, but I don't think it
should be required to be in C.
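
For reference, the pure-Python version is only a handful of lines -- here is a
minimal sketch of PEP 485's symmetric test (not necessarily the exact code
that would land in the stdlib):

    import math

    def isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0):
        if rel_tol < 0.0 or abs_tol < 0.0:
            raise ValueError('tolerances must be non-negative')
        if a == b:                        # fast path; also handles equal infinities
            return True
        if math.isinf(a) or math.isinf(b):
            return False                  # a remaining infinity is never "close"
        return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)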
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] PEP 488: elimination of PYO files

2015-03-06 Thread Brett Cannon
Over on the import-sig I proposed eliminating the concept of .pyo files
since they only signify that *some* optimization took place, not
*what* optimizations
took place. Everyone on the SIG was positive with the idea so I wrote a
PEP, got positive feedback from the SIG again, and so now I present to you
PEP 488 for discussion.

There is no patch yet, but this is not a complicated change and I could get
it done at the sprints at PyCon if necessary (I suspect updating the test
suite will take the most work).

There are currently two open issues, although one is purely a bikeshed
topic on formatting of file names so I don't really consider it open for
change from what is proposed in the PEP without Guido saying he hates my
preference or someone having a really good argument for some alternative.
The second open issue on the common case file name is something to
reasonably debate and come to consensus on.

===

PEP: 488
Title: Elimination of PYO files
Version: $Revision$
Last-Modified: $Date$
Author: Brett Cannon 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 20-Feb-2015
Post-History:
2015-03-06

Abstract


This PEP proposes eliminating the concept of PYO files from Python.
To continue supporting the separation of bytecode files based on
their optimization level, this PEP proposes extending the PYC file
name to include the optimization level in the bytecode repository
directory (i.e., the ``__pycache__`` directory).


Rationale
=

As of today, bytecode files come in two flavours: PYC and PYO. A PYC
file is the bytecode file generated and read from when no
optimization level is specified at interpreter startup (i.e., ``-O``
is not specified). A PYO file represents the bytecode file that is
read/written when **any** optimization level is specified (i.e., when
``-O`` is specified, including ``-OO``). This means that while PYC
files clearly delineate the optimization level used when they were
generated -- namely no optimizations beyond the peepholer -- the same
is not true for PYO files. Put in terms of optimization levels and
the file extension:

  - 0: ``.pyc``
  - 1 (``-O``): ``.pyo``
  - 2 (``-OO``): ``.pyo``

The reuse of the ``.pyo`` file extension for both level 1 and 2
optimizations means that there is no clear way to tell what
optimization level was used to generate the bytecode file. In terms
of reading PYO files, this can lead to an interpreter using a mixture
of optimization levels with its code if the user was not careful to
make sure all PYO files were generated using the same optimization
level (typically done by blindly deleting all PYO files and then
using the `compileall` module to compile all-new PYO files [1]_).
This issue is only compounded when people optimize Python code beyond
what the interpreter natively supports, e.g., using the astoptimizer
project [2]_.
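
The workaround described above amounts to something like the following
(run under the desired ``-O``/``-OO`` level; a sketch of the current
practice, not a recommendation)::

    import compileall, pathlib

    for pyo in pathlib.Path('.').rglob('*.pyo'):
        pyo.unlink()                      # wipe bytecode of unknown provenance
    compileall.compile_dir('.', quiet=1)  # regenerate at the current level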

In terms of writing PYO files, the need to delete all PYO files
every time one either changes the optimization level they want to use
or is unsure of what optimization was used the last time PYO files
were generated leads to unnecessary file churn. The change proposed
by this PEP also allows for **all** optimization levels to be
pre-compiled for bytecode files ahead of time, something that is
currently impossible thanks to the reuse of the ``.pyo`` file
extension for multiple optimization levels.

As for distributing bytecode-only modules, having to distribute both
``.pyc`` and ``.pyo`` files is unnecessary for the common use-case
of code obfuscation and smaller file deployments.


Proposal


To eliminate the ambiguity that PYO files present, this PEP proposes
eliminating the concept of PYO files and their accompanying ``.pyo``
file extension. To allow for the optimization level to be unambiguous
as well as to avoid having to regenerate optimized bytecode files
needlessly in the `__pycache__` directory, the optimization level
used to generate a PYC file will be incorporated into the bytecode
file name. Currently bytecode file names are created by
``importlib.util.cache_from_source()``, approximately using the
following expression defined by PEP 3147 [3]_, [4]_, [5]_::

    '{name}.{cache_tag}.pyc'.format(name=module_name,
                                    cache_tag=sys.implementation.cache_tag)

This PEP proposes to change the expression to::

    '{name}.{cache_tag}.opt-{optimization}.pyc'.format(
        name=module_name,
        cache_tag=sys.implementation.cache_tag,
        optimization=str(sys.flags.optimize))
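
For example, a module named ``foo`` compiled by an interpreter whose cache
tag is ``cpython-35`` would, under this expression, get the following
bytecode file names for optimization levels 0 through 2::

    >>> for level in range(3):
    ...     print('{name}.{cache_tag}.opt-{optimization}.pyc'.format(
    ...         name='foo', cache_tag='cpython-35', optimization=level))
    foo.cpython-35.opt-0.pyc
    foo.cpython-35.opt-1.pyc
    foo.cpython-35.opt-2.pyc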

The "opt-" prefix was chosen so as to provide a visual separator
from the cache tag. The placement of the optimization level after
the cache tag was chosen to preserve lexicographic sort order of
bytecode file names based on module name and cache tag which will
not vary for a single interpreter. The "opt-" prefix was chosen over
"o" so as to be somewhat self-documenting. The 

Re: [Python-Dev] PEP 488: elimination of PYO files

2015-03-06 Thread Brett Cannon
On Fri, Mar 6, 2015 at 1:03 PM Mark Shannon  wrote:

>
> On 06/03/15 16:34, Brett Cannon wrote:
> > Over on the import-sig I proposed eliminating the concept of .pyo files
> > since they only signify that /some/ optimization took place, not
> > /what/ optimizations took place. Everyone on the SIG was positive with
> > the idea so I wrote a PEP, got positive feedback from the SIG again, and
> > so now I present to you PEP 488 for discussion.
> >
> [snip]
>
> Historically -O and -OO have been the antithesis of optimisation, they
> change the behaviour of the program with no noticeable effect on
> performance.
> If a change is to be made, why not just drop .pyo files and be done with
> it?
>

I disagree with your premise that .pyo files don't have a noticeable effect
on performance. If you don't use asserts a lot then there is no effect, but
if you use them heavily or have them perform expensive calculations then
there is an impact. And the dropping of docstrings does have an impact on
memory usage when you use Python at scale.
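
Concretely, the semantic changes -O and -OO make are easy to see with a
small file run three ways -- ``python``, ``python -O``, and ``python -OO``
(a sketch):

    def f():
        """This docstring is dropped under -OO."""
        return 1

    print('__debug__ =', __debug__)   # False under -O and -OO
    print('f.__doc__ =', f.__doc__)   # None under -OO

    try:
        assert False, 'asserts still active'  # compiled away under -O/-OO
    except AssertionError:
        print('asserts are enabled')
    else:
        print('asserts were stripped')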

You're also assuming that we will never develop an AST optimizer that will
go beyond what the peepholer can do based on raw bytecode, or something
that involves a bit of calculation and thus something you wouldn't want to
do at startup.


>
> Any worthwhile optimisation needs to be done at runtime or involve much
> more than tweaking bytecode.
>

I disagree again. If you do something like whole-program analysis and want
to use that to optimize something, you will surface that through bytecode
and not by editing the source. So while you are doing "much more than
tweaking bytecode" externally to Python, you still have to surface it to the
interpreter through bytecode.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 488: elimination of PYO files

2015-03-06 Thread Brett Cannon
On Fri, Mar 6, 2015 at 1:27 PM Neil Girdhar  wrote:

> On Fri, Mar 6, 2015 at 1:11 PM, Brett Cannon  wrote:
>
>>
>>
>> On Fri, Mar 6, 2015 at 1:03 PM Mark Shannon  wrote:
>>
>>>
>>> On 06/03/15 16:34, Brett Cannon wrote:
>>> > Over on the import-sig I proposed eliminating the concept of .pyo files
>>> > since they only signify that /some/ optimization took place, not
>>> > /what/ optimizations took place. Everyone on the SIG was positive with
>>> > the idea so I wrote a PEP, got positive feedback from the SIG again,
>>> and
>>> > so now I present to you PEP 488 for discussion.
>>> >
>>> [snip]
>>>
>>> Historically -O and -OO have been the antithesis of optimisation, they
>>> change the behaviour of the program with no noticeable effect on
>>> performance.
>>> If a change is to be made, why not just drop .pyo files and be done with
>>> it?
>>>
>>
>> I disagree with your premise that .pyo files don't have a noticeable
>> effect on performance. If you don't use asserts a lot then there is no
>> effect, but if you use them heavily or have them perform expensive
>> calculations then there is an impact. And the dropping of docstrings does
>> have an impact on memory usage when you use Python at scale.
>>
>> You're also assuming that we will never develop an AST optimizer that
>> will go beyond what the peepholer can do based on raw bytecode, or
>> something that involves a bit of calculation and thus something you
>> wouldn't want to do at startup.
>>
>
> I don't want to speak for him, but you're going to get the best results
> optimizing ASTs at runtime, which is what I thought he was suggesting.
> Trying to optimize Python at compile time is setting your sights really
> low.   You have so little information then.
>

OK, I don't want to derail the discussion of the PEP into one over how best
to optimize CPython's performance relative to bytecode vs. runtime like
PyPy. The point is that we have -O and -OO and people do have uses for
those flags. People can also do custom optimizations thanks to the
flexibility of loaders.

All of this leads to wanting different bytecode files for different
optimization levels to make sure you're actually executing your code with
the optimizations you expect. If people think that optimizing code and
surfacing it in bytecode files is a waste and want to suggest either
dropping .pyo files entirely or dropping -O and only having -OO that's
fine, but that is not what this PEP is proposing nor a PEP I want to bother
writing.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 488: elimination of PYO files

2015-03-06 Thread Brett Cannon
Thanks! All suggestions applied to my local copy.

On Fri, Mar 6, 2015 at 1:55 PM Ethan Furman  wrote:

> On 03/06/2015 08:34 AM, Brett Cannon wrote:
> > Over on the import-sig I proposed eliminating the concept of .pyo files
> since they only signify that /some/ optimization
> > took place, not /what/ optimizations took place. Everyone on the SIG was
> positive with the idea so I wrote a PEP, got
> > positive feedback from the SIG again, and so now I present to you PEP
> 488 for discussion.
>
> +1 overall, comments in-line.
>
> > Implementation
> > ==
> >
> > importlib
> > -
> >
> > As ``importlib.util.cache_from_source()`` is the API that exposes
> > bytecode file paths as while as being directly used by importlib, it
> > requires the most critical change.
>
> Not sure what that sentence is supposed to say -- maybe "as well as" and
> not "as while as" ?
>
>
> > The ``debug_override`` parameter will be deprecated. As the parameter
> > expects a boolean, the integer value of the boolean will be used as
> > if it had been provided as the argument to ``optimization`` (a
> > ``None`` argument will mean the same as for ``optimization``). A
> > deprecation warning will be raised when ``debug_override`` is given a
> > value other than ``None``, but there are no plans for the complete
> > removal of the parameter as this time (but removal will be no later
> > than Python 4).
>
> "at this time" not "as this time"
>
>
> > Rest of the standard library
> > 
> >
> > The various functions exposed by the ``py_compile`` and
> > ``compileall`` functions will be updated as necessary to make sure
> > they follow the new bytecode file name semantics [6]_, [1]_. The CLI
> > for the ``compileall`` module will not be directly affected (the
> > ``-b`` flag will be implicitly as it will no longer generate ``.pyo``
> > files when ``-O`` is specified).
>
> "will be implicit" not "will be implicitly"
>
> --
> ~Ethan~
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> brett%40python.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 488: elimination of PYO files

2015-03-06 Thread Brett Cannon
On Fri, Mar 6, 2015 at 2:09 PM Benjamin Peterson 
wrote:

>
>
> On Fri, Mar 6, 2015, at 13:34, Brett Cannon wrote:
> > On Fri, Mar 6, 2015 at 1:27 PM Neil Girdhar 
> > wrote:
> >
> > > On Fri, Mar 6, 2015 at 1:11 PM, Brett Cannon  wrote:
> > >
> > >>
> > >>
> > >> On Fri, Mar 6, 2015 at 1:03 PM Mark Shannon  wrote:
> > >>
> > >>>
> > >>> On 06/03/15 16:34, Brett Cannon wrote:
> > >>> > Over on the import-sig I proposed eliminating the concept of .pyo
> files
> > >>> > since they only signify that /some/ optimization took place, not
> > >>> > /what/ optimizations took place. Everyone on the SIG was positive
> with
> > >>> > the idea so I wrote a PEP, got positive feedback from the SIG
> again,
> > >>> and
> > >>> > so now I present to you PEP 488 for discussion.
> > >>> >
> > >>> [snip]
> > >>>
> > >>> Historically -O and -OO have been the antithesis of optimisation,
> they
> > >>> change the behaviour of the program with no noticeable effect on
> > >>> performance.
> > >>> If a change is to be made, why not just drop .pyo files and be done
> with
> > >>> it?
> > >>>
> > >>
> > >> I disagree with your premise that .pyo files don't have a noticeable
> > >> effect on performance. If you don't use asserts a lot then there is no
> > >> effect, but if you use them heavily or have them perform expensive
> > >> calculations then there is an impact. And the dropping of docstrings
> does
> > >> have an impact on memory usage when you use Python at scale.
> > >>
> > >> You're also assuming that we will never develop an AST optimizer that
> > >> will go beyond what the peepholer can do based on raw bytecode, or
> > >> something that involves a bit of calculation and thus something you
> > >> wouldn't want to do at startup.
> > >>
> > >
> > > I don't want to speak for him, but you're going to get the best results
> > > optimizing ASTs at runtime, which is what I thought he was suggesting.
> > > Trying to optimize Python at compile time is setting your sights really
> > > low.   You have so little information then.
> > >
> >
> > OK, I don't want to derail the discussion of the PEP into one over how
> > best
> > to optimize CPython's performance relative to bytecode vs. runtime like
> > PyPy. The point is that we have -O and -OO and people do have uses for
> > those flags. People can also do custom optimizations thanks to the
> > flexibility of loaders.
>
> I think it would be preferable to deprecate -O and -OO and replace them
> with flags like --no-docstrings or --no-asserts. Ideally, "optimization"
> levels shouldn't change program semantics.
>

OK, but that doesn't influence the PEP's goal of dropping .pyo files.

Are you suggesting that the tag be changed to be less specific to
optimizations and more free-form? Like
`importlib.cpython-35.__no-asserts_no-docstrings__.pyc`? Otherwise stuff
like this gets baked into the .pyc file itself instead of the file name,
but I don't think we should just drop the ability to switch off asserts and
docstrings like Mark seemed to be suggesting.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 488: elimination of PYO files

2015-03-06 Thread Brett Cannon
On Fri, Mar 6, 2015 at 3:37 PM Antoine Pitrou  wrote:

> On Fri, 06 Mar 2015 18:11:19 +
> Brett Cannon  wrote:
> > And the dropping of docstrings does have an impact on
> > memory usage when you use Python at scale.
>
> What kind of "scale" are you talking about? Do you have any numbers
> about such impact?
>

I know YouTube at least uses -OO and I don't have numbers to share (the
numbers I was last shown were from years ago and I wouldn't be authorized to
share them anyway, but I do know they still use -OO).


>
> > You're also assuming that we will never develop an AST optimizer
>
> No, the assumption is that we don't have such an optimizer *right now*.
> Having command-line options because they might be useful some day is
> silly.
>

I'm not talking about changing any command-line option in the PEP so I
don't know what you're referring to.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 488: elimination of PYO files

2015-03-06 Thread Brett Cannon
On Fri, Mar 6, 2015 at 5:47 PM Antoine Pitrou  wrote:

> On Sat, 7 Mar 2015 09:34:20 +1100
> Steven D'Aprano  wrote:
>
> > On Fri, Mar 06, 2015 at 09:37:05PM +0100, Antoine Pitrou wrote:
> > > On Fri, 06 Mar 2015 18:11:19 +
> > > Brett Cannon  wrote:
> > > > And the dropping of docstrings does have an impact on
> > > > memory usage when you use Python at scale.
> > >
> > > What kind of "scale" are you talking about? Do you have any numbers
> > > about such impact?
> > >
> > > > You're also assuming that we will never develop an AST optimizer
> > >
> > > No, the assumption is that we don't have such an optimizer *right now*.
> > > Having command-line options because they might be useful some day is
> > > silly.
> >
> > Quoting the PEP:
> >
> > This issue is only compounded when people optimize Python
> > code beyond what the interpreter natively supports, e.g.,
> > using the astoptimizer project [2]_.
>
> The astoptimizer project is not part of Python. It's third-party
> software that has no relationship to .pyo files.
>

Directly, no. But the point is that the PEP enables the astoptimizer
project to write out .pyc files specifying different optimizations that
won't clash with -O or -OO .pyc files.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 488: elimination of PYO files

2015-03-06 Thread Brett Cannon
On Fri, Mar 6, 2015 at 6:49 PM Benjamin Peterson 
wrote:

>
>
> On Fri, Mar 6, 2015, at 15:11, Brett Cannon wrote:
> >
> > OK, but that doesn't influence the PEP's goal of dropping .pyo files.
>
> Correct.
>
> >
> > Are you suggesting that the tag be changed to be less specific to
> > optimizations and more free-form? Like
> > `importlib.cpython-35.__no-asserts_no-docstrings__.pyc`? Otherwise stuff
> > like this gets baked into the .pyc file itself instead of the file name,
> > but I don't think we should just drop the ability to switch off asserts
> > and
> > docstrings like Mark seemed to be suggesting.
>
> Basically, though the filename strings could perhaps be more compact.
>

That's fine. Do you have a file name format you want to propose then
instead of "opt-{}" (which is what I'm assuming your "basically" is
referring to)?
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 488: elimination of PYO files

2015-03-07 Thread Brett Cannon
On Sat, Mar 7, 2015 at 9:29 AM Ron Adam  wrote:

>
>
> On 03/07/2015 04:58 AM, Steven D'Aprano wrote:
> > On Fri, Mar 06, 2015 at 08:00:20PM -0500, Ron Adam wrote:
> >
> >> >Have you considered doing this by having different magic numbers in the
> >> >.pyc file for standard, -O, and -O0 compiled bytecode files?  Python
> >> >already checks that number and recompiles the files if it's not what
> it's
> >> >expected to be.  And it wouldn't require any naming conventions or new
> >> >cache directories.  It seems to me it would be much easier to do as
> well.
> > And it would fail to solve the problem. The problem isn't just that the
> > .pyo file can contain the wrong byte-code for the optimization level,
> > that's only part of the problem. Another issue is that you cannot have
> > pre-compiled byte-code for multiple different optimization levels. You
> > can have a "no optimization" byte-code file, the .pyc file, but only one
> > "optimized" byte-code file at the same time.
> >
> > Brett's proposal will allow -O optimized and -OO optimized byte-code
> > files to co-exist, as well as setting up a clear naming convention for
> > future optimizers in either the Python compiler or third-party
> > optimizers.
>
> So all the different versions can be generated ahead of time. I think that
> is the main difference.
>
> My suggestion would cause a recompile of all dependent python files when
> different optimisation levels are used in different projects. Which may be
> worse than not generating bytecode files at all.  OK
>
>
> A few questions...
>
> Can a submodule use an optimisation level that is different from the file
> that imports it?   (Other than the case this is trying to solve.)
>

Currently yes, with this PEP no (without purposefully doing it with some
custom loader).


>
> Is there way to specify that an imported module not use any optimisation
> level, or to always use a specific optimisation level?
>

Not without a custom loader.


>
> Is there a way to run tests with all the different optimisation levels?
>

You have to remember you can't change the optimization level of the
interpreter once you have started it up. The change in semantics is handled
deep in the AST compiler and there is no exposed way to flip-flop the
setting once the interpreter starts. So testing the different optimization
levels would require either (a) implementing the optimizations as part of
some AST optimizer and doing the right thing in terms of reloading the
modules, or (b) simply re-running the tests by launching the interpreter
again with different flags (this is where something like tox is useful).
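
In practice (b) just means launching a fresh interpreter per level, e.g.
something like this sketch (substitute your real test command):

    import subprocess, sys

    for flags in ([], ['-O'], ['-OO']):
        subprocess.check_call([sys.executable] + flags +
                              ['-m', 'unittest', 'discover'])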
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


  1   2   3   4   5   6   7   8   9   10   >