Re: [Python-Dev] PEP 479 and asyncio

2014-11-28 Thread Greg Ewing

Guido van Rossum wrote:

The issue here is that asyncio only interprets StopIteration as 
returning from the generator (with a possible value), while a Trollius 
coroutine must use "raise Return()" to specify a return value; 
this works as long as Return is a subclass of StopIteration, but PEP 479 
will break this by replacing the StopIteration with RuntimeError.


I don't understand. If I'm interpreting PEP 479 correctly, in
'x = yield from foo', a StopIteration raised by foo.__next__()
doesn't get turned into a RuntimeError; rather it just stops the
sub-iteration as usual and its value attribute gets assigned to x.

As long as a Trollius coroutine behaves like something implementing
the iterator protocol, it should continue to work fine with
Return as a subclass of StopIteration.
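
A minimal sketch of that behaviour (runnable on any Python 3, including with PEP 479 semantics active): the sub-generator's StopIteration ends the 'yield from' normally and supplies its value.

```python
def sub():
    yield 1
    return "done"  # ends the iteration; the value travels via StopIteration

def outer():
    # sub()'s StopIteration is absorbed by 'yield from', not re-raised;
    # its .value attribute is assigned to x.
    x = yield from sub()
    yield x

assert list(outer()) == [1, "done"]
```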

Or is there something non-obvious about Trollius that I'm
missing?

--
Greg
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 479 and asyncio

2014-11-28 Thread Victor Stinner
2014-11-28 10:12 GMT+01:00 Greg Ewing :
> I don't understand. If I'm interpreting PEP 479 correctly, in
> 'x = yield from foo', a StopIteration raised by foo.__next__()
> doesn't get turned into a RuntimeError

The Trollius coroutine uses "raise Return(value)", which is basically a
"raise StopIteration(value)", and this is forbidden by PEP 479.
Under PEP 479, the StopIteration is replaced with a RuntimeError.
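Concretely, on a Python where PEP 479 semantics are active (3.7+, or with `from __future__ import generator_stop`), the replacement looks like this sketch:

```python
def gen():
    yield 1
    raise StopIteration(2)  # what Trollius' "raise Return(2)" boils down to

g = gen()
assert next(g) == 1
try:
    next(g)
except RuntimeError as exc:
    # PEP 479: the StopIteration no longer ends the generator silently;
    # it is replaced with a RuntimeError chained to the original exception.
    assert isinstance(exc.__context__, StopIteration)
    assert exc.__context__.value == 2
```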

Victor


Re: [Python-Dev] PEP 479 and asyncio

2014-11-28 Thread Chris Angelico
On Fri, Nov 28, 2014 at 8:18 PM, Victor Stinner
 wrote:
> 2014-11-28 10:12 GMT+01:00 Greg Ewing :
>> I don't understand. If I'm interpreting PEP 479 correctly, in
>> 'x = yield from foo', a StopIteration raised by foo.__next__()
>> doesn't get turned into a RuntimeError
>
> The Trollius coroutine uses "raise Return(value)", which is basically a
> "raise StopIteration(value)", and this is forbidden by PEP 479.
> Under PEP 479, the StopIteration is replaced with a RuntimeError.

The question, I guess, is: Why can't it be translated into "return
value"? One answer is: Because that's not legal in Python 2.7. And I
can't respond to that answer, unfortunately. That's the one major
backward compat issue.

(Another answer may be "Because it would require changes to many
intermediate generators, not all of which are under our control". If
that's the issue, then it'll simply be a matter of telling people
"When you upgrade to Python 3.6, you will start to see warnings unless
you make this change".)
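For reference, the 2.7 restriction is enforced at compile time, so a sketch can only show the Python 3 side:

```python
# Python 3 accepts a return with a value inside a generator:
src = "def g():\n    yield 1\n    return 2\n"
code_obj = compile(src, "<demo>", "exec")  # compiles fine on Python 3

# Compiling the same source on Python 2.7 fails with
# "SyntaxError: 'return' with argument inside generator",
# which is why Trollius cannot simply translate "raise Return(value)"
# into "return value" while it still supports Python 2.
```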

ChrisA


Re: [Python-Dev] PEP 479 and asyncio

2014-11-28 Thread Victor Stinner
2014-11-28 3:49 GMT+01:00 Nick Coghlan :
> I think between contextlib and Trollius, the case is starting to be
> made for raising an UnhandledStopIteration subclass of RuntimeError,
> rather than a generic RuntimeError.

I modified Trollius to test such idea:

* Return inherits from Exception (not from StopIteration)
* on Python 3, @trollius.coroutine wraps the coroutine to catch
RuntimeError: if exc.__context__ is a StopIteration, return
exc.__context__.value

The test suite passes with this additional coroutine wrapper on Python
3.5 patched with pep479.patch (and on unpatched Python 3.3).

So yes, it may help to have a new specialized exception, even if "it
works" with RuntimeError.
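A rough sketch of that wrapper (the names here are illustrative, not Trollius' actual code):

```python
import functools

def coroutine(func):
    """Sketch: translate a PEP 479 RuntimeError whose __context__ is a
    StopIteration back into a plain generator return value."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return (yield from func(*args, **kwargs))
        except RuntimeError as exc:
            if isinstance(exc.__context__, StopIteration):
                return exc.__context__.value
            raise
    return wrapper

@coroutine
def task():
    yield "step"
    raise StopIteration(42)  # stands in for Trollius' "raise Return(42)"

# Driving the wrapped coroutine recovers the value despite PEP 479:
g = task()
assert next(g) == "step"
try:
    next(g)
except StopIteration as stop:
    assert stop.value == 42
```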

The drawback is that the extra layer would make Trollius even slower.

Victor


Re: [Python-Dev] PEP 479 and asyncio

2014-11-28 Thread Olemis Lang
Off-topic: not about asyncio, but related to the PEP and other things
being discussed in this thread.

On 11/28/14, Victor Stinner  wrote:
> 2014-11-28 3:49 GMT+01:00 Nick Coghlan :
>
[...]
>
> So yes, it may help to have a new specialized exception, even if "it
> works" with RuntimeError.
>

This is roughly the situation I tried to explain in another thread
about PEP 479 (though I did not use the right words), and it will be a
very common situation in practice.

> The drawback is that a new layer would make trollius even slower.
>

e.g. in a (private) library I wrote for a company, which is basically
about composition of generators, there is a situation similar to what
Victor explained in this thread. I would mostly end up doing one of a
couple of things:

{{{#!py
try:
    ...
except RuntimeError:
    return
}}}

which over-complicates function definitions and introduces a long chain
of (redundant) exception handling code just to end up raising
StopIteration once again (i.e. poor performance), or ...

{{{#!py
# decorate functions in the public API
# ... may be improved but you get the idea
def myown_stopiter(f):
    def wrapper(*args, **kwargs):
        ...
        try:
            ...
        except RuntimeError as exc:
            # PEP 479 chains the original exception, so check __context__
            # rather than exc.args
            if isinstance(exc.__context__, StopIteration):
                raise StopIteration  # or re-raise exc.__context__ ?
            else:
                raise
        ...
    return wrapper
}}}

which is actually a re-implementation of exception matching itself

Otherwise ...

{{{#!py

# in generator definition
# rather than natural syntax for defining sequence logic
raise MyOwnException(...)

# decorate functions in the public API
# ... may be improved but you get the idea

def myown_stopiter(f):
    def wrapper(*args, **kwargs):
        ...
        try:
            ...
        except MyOwnException:
            raise StopIteration
        ...
    return wrapper
}}}

In the last two cases the library ends up having two functions: the
one that allows (MyOwnException | RuntimeError) to bubble up (only
used for defining compositions), and the one that translates the
exception (which *should* not be used for compositions, even if it
would work, because of the performance penalty) ... thus leading to
further complications at the API level ...

Built-in behavior that raises a subclass of RuntimeError is a much
better approach, similar to the second case mentioned above. It might
definitely make it less painful to rewrite things all over to cope
with the incompatibilities caused by PEP 479, but AFAICT the
performance issues will be there for a while.

-- 
Regards,

Olemis - @olemislc

Apache(tm) Bloodhound contributor
http://issues.apache.org/bloodhound
http://blood-hound.net

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/

Featured article:


Re: [Python-Dev] PEP 479 and asyncio

2014-11-28 Thread Olemis Lang
correction ...

On 11/28/14, Olemis Lang  wrote:
>
> try:
>...
> except RuntimeError:
>return
>

... should be

{{{#!py

# inside generator function body

try:
   ...
except StopIteration:
   return
}}}

[...]

-- 
Regards,

Olemis - @olemislc

Apache(tm) Bloodhound contributor
http://issues.apache.org/bloodhound
http://blood-hound.net

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/

Featured article:


[Python-Dev] Summary of Python tracker Issues

2014-11-28 Thread Python tracker

ACTIVITY SUMMARY (2014-11-21 - 2014-11-28)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open4668 (+10)
  closed 30056 (+42)
  total  34724 (+52)

Open issues with patches: 2176 


Issues opened (35)
==================

#22676: _pickle's whichmodule() is slow
http://bugs.python.org/issue22676  reopened by zach.ware

#22685: memory leak: no transport for pipes by create_subprocess_exec/
http://bugs.python.org/issue22685  reopened by koobs

#22912: urlretreive locks up in 2.7.8
http://bugs.python.org/issue22912  opened by TaylorSMarks

#22914: Rewrite of Python 2/3 porting HOWTO
http://bugs.python.org/issue22914  opened by brett.cannon

#22918: Doc for __iter__ makes inexact comment about dict.__iter__
http://bugs.python.org/issue22918  opened by eric.araujo

#22919: Update PCBuild for VS 2015
http://bugs.python.org/issue22919  opened by steve.dower

#22922: asyncio: call_soon() should raise an exception if the event lo
http://bugs.python.org/issue22922  opened by haypo

#22923: No prompt for "display all X possibilities" on completion-enab
http://bugs.python.org/issue22923  opened by yoha

#22924: Use of deprecated cgi.escape
http://bugs.python.org/issue22924  opened by serhiy.storchaka

#22926: asyncio: raise an exception when called from the wrong thread
http://bugs.python.org/issue22926  opened by haypo

#22928: HTTP header injection in urrlib2/urllib/httplib/http.client
http://bugs.python.org/issue22928  opened by Guido

#22931: cookies with square brackets in value
http://bugs.python.org/issue22931  opened by Waldemar.Parzonka

#22932: email.utils.formatdate uses unreliable time.timezone constant
http://bugs.python.org/issue22932  opened by mitya57

#22933: Misleading sentence in doc for shutil.move
http://bugs.python.org/issue22933  opened by newbie

#22935: Disabling SSLv3 support
http://bugs.python.org/issue22935  opened by kroeckx

#22936: traceback module has no way to show locals
http://bugs.python.org/issue22936  opened by rbcollins

#22937: unittest errors can't show locals
http://bugs.python.org/issue22937  opened by rbcollins

#22939: integer overflow in iterator object
http://bugs.python.org/issue22939  opened by hakril

#22941: IPv4Interface arithmetic changes subnet mask
http://bugs.python.org/issue22941  opened by kwi.dk

#22942: Language Reference - optional comma
http://bugs.python.org/issue22942  opened by jordan

#22943: bsddb: test_queue fails on Windows
http://bugs.python.org/issue22943  opened by benjamin.peterson

#22945: Ctypes inconsistent between Linux and OS X
http://bugs.python.org/issue22945  opened by Daniel.Standage

#22946: urllib gives incorrect url after open when using HTTPS
http://bugs.python.org/issue22946  opened by John.McKay

#22947: Enable 'imageop' - "Multimedia Srvices Feature module" for 64-
http://bugs.python.org/issue22947  opened by pankaj.s01

#22949: fnmatch.translate doesn't add ^ at the beginning
http://bugs.python.org/issue22949  opened by mstol

#22951: unexpected return from float.__repr__() for inf, -inf, nan
http://bugs.python.org/issue22951  opened by jaebae17

#22952: multiprocessing doc introduction not in affirmative tone
http://bugs.python.org/issue22952  opened by davin

#22953: Windows installer configures system PATH also when installing 
http://bugs.python.org/issue22953  opened by pekka.klarck

#22955: Pickling of methodcaller and attrgetter
http://bugs.python.org/issue22955  opened by Antony.Lee

#22956: Improved support for prepared SQL statements
http://bugs.python.org/issue22956  opened by elfring

#22958: Constructors of weakref mapping classes don't accept "self" an
http://bugs.python.org/issue22958  opened by serhiy.storchaka

#22959: http.client.HTTPSConnection checks hostname when SSL context h
http://bugs.python.org/issue22959  opened by zodalahtathi

#22960: xmlrpc.client.ServerProxy() should accept a custom SSL context
http://bugs.python.org/issue22960  opened by zodalahtathi

#22961: ctypes.WinError & OSError
http://bugs.python.org/issue22961  opened by simonzack

#22962: ipaddress: Add optional prefixlen argument to ip_interface and
http://bugs.python.org/issue22962  opened by Gary.van.der.Merwe



Most recent 15 issues with no replies (15)
==========================================

#22962: ipaddress: Add optional prefixlen argument to ip_interface and
http://bugs.python.org/issue22962

#22960: xmlrpc.client.ServerProxy() should accept a custom SSL context
http://bugs.python.org/issue22960

#22959: http.client.HTTPSConnection checks hostname when SSL context h
http://bugs.python.org/issue22959

#22958: Constructors of weakref mapping classes don't accept "self" an
http://bugs.python.org/issue22958

#22956: Improved support for prepared SQL statements
http://bugs.python.org/issue22956

#22947: Enable 'imageop' - "Multimedia Srvices Feature module" for 64-
http://bugs.python.org/issue22947


Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-28 Thread Demian Brecht
On Tue, Nov 25, 2014 at 6:52 AM, Brett Cannon  wrote:
>
> I suspect if we make sure we add Bitbucket and GitHub login support to the 
> issue tracker then that would help go a fair distance to helping with the 
> GitHub pull of reach (and if we make it so people can simply paste in their 
> fork's URL into the issue tracker and we simply grab a new patch for review 
> that would go even farther).

Chiming in horribly late, so hopefully this hasn't already been
mentioned (I've only loosely been following this thread).

In addition to the login support (I'm not sold on how much that would
help the reach), I think it would be really beneficial to have some
documentation on either emulating git-style workflow in hg or
detailing a git fork workflow while working on multiple patches
concurrently and keeping master in sync with hg default (or perhaps
even both).

I primarily use git for development. Being able to context switch to
work on CPython in any capacity (PEPs, code, etc.) with little or no
effort would be hugely beneficial for me. A well-defined workflow in
the docs (perhaps alongside "Lifecycle of a patch"?) would have
significantly lowered the initial barrier to entry for me. Given the
amount of time I put into that initially, I can only imagine how many
people it has turned away from contributing entirely. I'd definitely be
interested in contributing documentation around this (I've written up
something similar here
http://demianbrecht.github.io/vcs/2014/07/31/from-git-to-hg/) if
others feel that it would be valuable.

IMHO, you don't want to limit submissions due to the tech stack (one
of the arguments I've seen for not moving to Github was quality of
submissions). This will also limit high quality work from those who
simply don't have time to adopt new tech and workflows when they're
not being paid to do so. I have no strong opinion of where and how the
official repos are stored so long as I can work on them and contribute
to them in the way that's most efficient for me. I imagine that
statement would also hold true for most.


Re: [Python-Dev] PEP 479 and asyncio

2014-11-28 Thread Guido van Rossum
@Victor: I'm glad you found a work-around. Maybe you can let your users
control it with a flag? It is often true that straddling code pays a
performance cost. Hopefully the slight performance dip might be an
incentive for people to start thinking about porting to asyncio.

@Olemis: You never showed examples of how your code would be used, so it's
hard to understand what you're trying to do and how PEP 479 affects you.

On Fri, Nov 28, 2014 at 7:21 AM, Olemis Lang  wrote:

> correction ...
>
> On 11/28/14, Olemis Lang  wrote:
> >
> > try:
> >...
> > except RuntimeError:
> >return
> >
>
> ... should be
>
> {{{#!py
>
> # inside generator function body
>
> try:
>...
> except StopIteration:
>return
> }}}
>
> [...]
>
> --
> Regards,
>
> Olemis - @olemislc
>
> Apache(tm) Bloodhound contributor
> http://issues.apache.org/bloodhound
> http://blood-hound.net
>
> Blog ES: http://simelo-es.blogspot.com/
> Blog EN: http://simelo-en.blogspot.com/
>
> Featured article:



-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Please reconsider PEP 479.

2014-11-28 Thread Raymond Hettinger

> On Nov 27, 2014, at 8:52 AM, Guido van Rossum  wrote:
> 
> I understand that @allow_import_stop represents a compromise, an attempt at 
> calming the waves that PEP 479 has caused. But I still want to push back 
> pretty hard on this idea.
> 
> - It means we're forever stuck with two possible semantics for StopIteration 
> raised in generators.
> 
> - It complicates the implementation, because (presumably) a generator marked 
> with @allow_stop_import should not cause a warning when a StopIteration 
> bubbles out -- so we actually need another flag to silence the warning.
> 
> - I don't actually know whether other Python implementations have the ability 
> to copy code objects to change flags.
> 
> - It actually introduces a new incompatibility, that has to be solved in 
> every module that wants to use it (as you show above), whereas just putting 
> try/except around unguarded next() calls is fully backwards compatible.
> 
> - Its existence encourages people to use the decorator in favor of fixing 
> their code properly.
> 
> - The decorator is so subtle that it probably needs to be explained to 
> everyone who encounters it (and wasn't involved in this PEP discussion). 
> Because of this I would strongly advise against using it to "fix" the 
> itertools examples in the docs; it's just too magical. (IIRC only 2 examples 
> actually depend on this.)

I concur.  PEP 479 fixes are trivially easy to do without a decorator.

After Guido pronounced on the PEP, I fixed up several parts of the standard 
library in just a few minutes.  It's not hard.
https://mail.python.org/pipermail/python-checkins/2014-November/133252.html 

https://mail.python.org/pipermail/python-checkins/2014-November/133253.html 


Also, I'm submitting a 479 patch to the Django project so we won't have to 
worry about this one.

I recommend that everyone just accept that the PEP is a done deal and stop 
adding complexity or work-arounds.  We have a lot of things going for us on 
this one:  1) the affected code isn't common-place (mostly in producer/consumer 
middleware tools created by tool makers rather than by tool users), 2) the 
RuntimeError is immediate and clear about both the cause and the repair, 3) the 
fixes are trivially easy to make (add try/except around next() calls and 
replace "raise StopIteration" with "return").
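
For illustration (a hypothetical pairwise generator, not one of the actual stdlib fixes), the two repairs look like this:

```python
# Before: under PEP 479, the StopIteration leaking from an unguarded
# next() call becomes a RuntimeError when the input runs out.
def pairs_broken(iterable):
    it = iter(iterable)
    while True:
        yield next(it), next(it)

# After: guard the next() calls and use a plain return.
def pairs(iterable):
    it = iter(iterable)
    while True:
        try:
            yield next(it), next(it)
        except StopIteration:
            return

assert list(pairs([1, 2, 3, 4])) == [(1, 2), (3, 4)]
assert list(pairs([1, 2, 3])) == [(1, 2)]  # odd leftover is dropped
```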

Ideally, everyone will let this die and go back to being with family for the 
holidays (or back to work if you don't have a holiday this week).


Raymond


Re: [Python-Dev] Please reconsider PEP 479.

2014-11-28 Thread Guido van Rossum
Thanks for being a good sport, Raymond! I've probably spent too much time
fretting about this, so thanks for the reminder. I want to get back to
other things too, in particular the type hinting PEP: there's a draft, but
there are many things we --the co-authors-- want to change before we bother
the community with another review. And that one will certainly take longer
than five days!

On Fri, Nov 28, 2014 at 12:01 PM, Raymond Hettinger <
raymond.hettin...@gmail.com> wrote:

>
> On Nov 27, 2014, at 8:52 AM, Guido van Rossum  wrote:
>
> I understand that @allow_import_stop represents a compromise, an attempt
> at calming the waves that PEP 479 has caused. But I still want to push back
> pretty hard on this idea.
>
> - It means we're forever stuck with two possible semantics for
> StopIteration raised in generators.
>
> - It complicates the implementation, because (presumably) a generator
> marked with @allow_stop_import should not cause a warning when a
> StopIteration bubbles out -- so we actually need another flag to silence
> the warning.
>
> - I don't actually know whether other Python implementations have the
> ability to copy code objects to change flags.
>
> - It actually introduces a new incompatibility, that has to be solved in
> every module that wants to use it (as you show above), whereas just putting
> try/except around unguarded next() calls is fully backwards compatible.
>
> - Its existence encourages people to use the decorator in favor of fixing
> their code properly.
>
> - The decorator is so subtle that it probably needs to be explained to
> everyone who encounters it (and wasn't involved in this PEP discussion).
> Because of this I would strongly advise against using it to "fix" the
> itertools examples in the docs; it's just too magical. (IIRC only 2
> examples actually depend on this.)
>
>
> I concur.  PEP 479 fixes are trivially easy to do without a decorator.
>
> After Guido pronounced on the PEP, I fixed-up several parts of the
> standard library in just a few minutes.  It's not hard.
> https://mail.python.org/pipermail/python-checkins/2014-November/133252.html
> https://mail.python.org/pipermail/python-checkins/2014-November/133253.html
>
> Also, I'm submitting a 479 patch to the Django project so we won't have to
> worry about this one.
>
> I recommend that everyone just accept that the PEP is a done deal and stop
> adding complexity or work-arounds.  We have a lot of things going for us on
> this one:  1) the affected code isn't common-place (mostly in
> producer/consumer middleware tools created by tool makers rather than by
> tool users), 2) the RuntimeError is immediate and clear about both the
> cause and the repair, 3) the fixes are trivially easy to make (add
> try/except around next() calls and replace "raise StopIteration" with
> "return").
>
> Ideally, everyone will let this die and go back to being with family for
> the holidays (or back to work if you don't have a holiday this week).
>
>
> Raymond
>



-- 
--Guido van Rossum (python.org/~guido)


[Python-Dev] advice needed: best approach to enabling "metamodules"?

2014-11-28 Thread Nathaniel Smith
Hi all,

There was some discussion on python-ideas last month about how to make
it easier/more reliable for a module to override attribute access.
This is useful for things like autoloading submodules (accessing
'foo.bar' triggers the import of 'bar'), or for deprecating module
attributes that aren't functions. (Accessing 'foo.bar' emits a
DeprecationWarning, "the bar attribute will be removed soon".) Python
has had some basic support for this for a long time -- if a module
overwrites its entry in sys.modules[__name__], then the object that's
placed there will be returned by 'import'. This allows one to define
custom subclasses of module and use them instead of the default,
similar to how metaclasses allow one to use custom subclasses of
'type'.
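
For instance, the deprecation use case can be sketched with a module subclass (all names here are hypothetical):

```python
import types
import warnings

class DeprecatingModule(types.ModuleType):
    """Hypothetical module subclass: accessing a deprecated attribute warns."""
    _deprecated = {"bar": "the bar attribute will be removed soon"}

    def __getattr__(self, name):  # only called when normal lookup fails
        if name in self._deprecated:
            warnings.warn(self._deprecated[name], DeprecationWarning)
            return "bar-value"  # stand-in for the real attribute
        raise AttributeError(name)

# Inside the package itself this would be activated with something like
#     sys.modules[__name__] = DeprecatingModule(__name__)
# which is exactly the step that runs into the problems described next.
```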

In practice though it's very difficult to make this work safely and
correctly for a top-level package. The main problem is that when you
create a new object to stick into sys.modules, this necessarily means
creating a new namespace dict. And now you have a mess, because now
you have two dicts: new_module.__dict__ which is the namespace you
export, and old_module.__dict__, which is the globals() for the code
that's trying to define the module namespace. Keeping these in sync is
extremely error-prone -- consider what happens, e.g., when your
package __init__.py wants to import submodules which then recursively
import the top-level package -- so it's difficult to justify for the
kind of large packages that might be worried about deprecating entries
in their top-level namespace. So what we'd really like is a way to
somehow end up with an object that (a) has the same __dict__ as the
original module, but (b) is of our own custom module subclass. If we
can do this then metamodules will become safe and easy to write
correctly.

(There's a little demo of working metamodules here:
   https://github.com/njsmith/metamodule/
but it uses ctypes hacks that depend on non-stable parts of the
CPython ABI, so it's not a long-term solution.)

I've now spent some time trying to hack this capability into CPython
and I've made a list of the possible options I can think of to fix
this. I'm writing to python-dev because none of them are obviously The
Right Way so I'd like to get some opinions/ruling/whatever on which
approach to follow up on.

Option 1: Make it possible to change the type of a module object
in-place, so that we can write something like

   sys.modules[__name__].__class__ = MyModuleSubclass

Option 1 downside: The invariants required to make __class__
assignment safe are complicated, and only implemented for
heap-allocated type objects. PyModule_Type is not heap-allocated, so
making this work would require lots of delicate surgery to
typeobject.c. I'd rather not go down that rabbit-hole.



Option 2: Make PyModule_Type into a heap type allocated at interpreter
startup, so that the above just works.

Option 2 downside: PyModule_Type is exposed as a statically-allocated
global symbol, so doing this would involve breaking the stable ABI.



Option 3: Make it legal to assign to the __dict__ attribute of a
module object, so that we can write something like

   new_module = MyModuleSubclass(...)
   new_module.__dict__ = sys.modules[__name__].__dict__
   sys.modules[__name__].__dict__ = {} # ***
   sys.modules[__name__] = new_module

The line marked *** is necessary because the way modules are designed,
they expect to control the lifecycle of their __dict__. When the
module object is initialized, it fills in a bunch of stuff in the
__dict__. When the module object (not the dict object!) is
deallocated, it deletes everything from the __dict__. This latter
feature in particular means that having two module objects sharing the
same __dict__ is bad news.

Option 3 downside: The paragraph above. Also, there's stuff inside the
module struct besides just the __dict__, and more stuff has appeared
there over time.



Option 4: Add a new function sys.swap_module_internals, which takes
two module objects and swaps their __dict__ and other attributes. By
making the operation a swap instead of an assignment, we avoid the
lifecycle pitfalls from Option 3. By making it a builtin, we can make
sure it always handles all the module fields that matter, not just
__dict__. Usage:

   new_module = MyModuleSubclass(...)
   sys.swap_module_internals(new_module, sys.modules[__name__])
   sys.modules[__name__] = new_module

Option 4 downside: Obviously a hack.



Option 3 or 4 both seem workable, it just depends on which way we
prefer to hold our nose. Option 4 is slightly more correct in that it
works for *all* modules, but OTOH at the moment the only time Option 3
*really* fails is for compiled modules with PEP 3121 metadata, and
compiled modules can already use a module subclass via other means
(since they instantiate their own module objects).

Thoughts? Suggestions on other options I've missed? Should I go ahead
and write a patch for one of these?

-n

-- 
Nathaniel J. Smith

Re: [Python-Dev] advice needed: best approach to enabling "metamodules"?

2014-11-28 Thread Chris Angelico
On Sat, Nov 29, 2014 at 12:59 PM, Nathaniel Smith  wrote:
> Option 4: Add a new function sys.swap_module_internals, which takes
> two module objects and swaps their __dict__ and other attributes. By
> making the operation a swap instead of an assignment, we avoid the
> lifecycle pitfalls from Option 3. By making it a builtin, we can make
> sure it always handles all the module fields that matter, not just
> __dict__. Usage:
>
>new_module = MyModuleSubclass(...)
>sys.swap_module_internals(new_module, sys.modules[__name__])
>sys.modules[__name__] = new_module
>
> Option 4 downside: Obviously a hack.

This one corresponds to what I've seen in quite a number of C APIs.
It's not ideal, but nothing is; and at least this way, it's clear that
you're fiddling with internals. Letting the interpreter do the
grunt-work for you is *definitely* preferable to having recipes out
there saying "swap in a new __dict__, then don't forget to clear the
old module's __dict__", which will have massive versioning issues as
soon as a new best-practice comes along; making it a function, like
this, means its implementation can smoothly change between versions
(even in a bug-fix release).

Would it be better to make that function also switch out the entry in
sys.modules? That way, it's 100% dedicated to this job of "I want to
make a subclass of module and use that for myself", and could then be
made atomic against other imports. I've no idea whether there's any
other weird shenanigans that could be deployed with this kind of
module switch, nor whether cutting them out would be a good or bad
thing!

ChrisA


Re: [Python-Dev] advice needed: best approach to enabling "metamodules"?

2014-11-28 Thread Guido van Rossum
Are these really all our options? All of them sound like hacks, none of
them sound like anything the language (or even the CPython implementation)
should sanction. Have I missed the discussion where the use cases and
constraints were analyzed and all other approaches were rejected? (I might
have some half-baked ideas, but I feel I should read up on the past
discussion first, and they are probably more fit for python-ideas than for
python-dev. Plus I'm just writing this email because I'm procrastinating on
the type hinting PEP. :-)

--Guido

On Fri, Nov 28, 2014 at 7:45 PM, Chris Angelico  wrote:

> On Sat, Nov 29, 2014 at 12:59 PM, Nathaniel Smith  wrote:
> > Option 4: Add a new function sys.swap_module_internals, which takes
> > two module objects and swaps their __dict__ and other attributes. By
> > making the operation a swap instead of an assignment, we avoid the
> > lifecycle pitfalls from Option 3. By making it a builtin, we can make
> > sure it always handles all the module fields that matter, not just
> > __dict__. Usage:
> >
> >new_module = MyModuleSubclass(...)
> >sys.swap_module_internals(new_module, sys.modules[__name__])
> >sys.modules[__name__] = new_module
> >
> > Option 4 downside: Obviously a hack.
>
> This one corresponds to what I've seen in quite a number of C APIs.
> It's not ideal, but nothing is; and at least this way, it's clear that
> you're fiddling with internals. Letting the interpreter do the
> grunt-work for you is *definitely* preferable to having recipes out
> there saying "swap in a new __dict__, then don't forget to clear the
> old module's __dict__", which will have massive versioning issues as
> soon as a new best-practice comes along; making it a function, like
> this, means its implementation can smoothly change between versions
> (even in a bug-fix release).
>
> Would it be better to make that function also switch out the entry in
> sys.modules? That way, it's 100% dedicated to this job of "I want to
> make a subclass of module and use that for myself", and could then be
> made atomic against other imports. I've no idea whether there's any
> other weird shenanigans that could be deployed with this kind of
> module switch, nor whether cutting them out would be a good or bad
> thing!
>
> ChrisA



-- 
--Guido van Rossum (python.org/~guido)