Re: Python in CS1
On Sat, May 21, 2011 at 09:30, Franck Ditter wrote:
> Except at MIT, who knows some good CS1 references for teaching Python ?

James Shuttleworth and I did a lot of this at Coventry; the book Python for Rookies came from that. We don't use Python in CS1 at Wolverhampton, but James is still actively using our old syllabus (or a variant). You can reach him here: http://dis-dot-dat.net/

Sarah

--
Sarah Mount, Senior Lecturer, University of Wolverhampton
website: http://www.snim2.org/
twitter: @snim2
--
http://mail.python.org/mailman/listinfo/python-list
Call for Papers - Programming Language Evolution Workshop 2015
2nd Workshop on Programming Language Evolution (PLE) 2015
(colocated with ECOOP 2015, Prague, Czech Republic)
http://2015.ecoop.org/track/PLE-2015-papers

Call for papers
---

Programming languages tend to evolve in response to user needs, hardware advances, and research developments. Language evolution artefacts may include new compilers and interpreters or new language standards. Evolving programming languages is however challenging at various levels.

Firstly, the impact on developers can be negative. For example, if two language versions are incompatible (e.g., Python 2 and 3) developers must choose to either co-evolve their codebase (which may be costly) or reject the new language version (which may have support implications).

Secondly, evaluating a proposed language change is difficult; language designers often lack the infrastructure to assess the change. This may lead to older features remaining in future language versions to maintain backward compatibility, increasing the language's complexity (e.g., FORTRAN 77 to Fortran 90).

Thirdly, new language features may interact badly with existing features, leading to unforeseen bugs and ambiguities (e.g., the addition of Java generics).

This workshop brings together researchers and developers to tackle the important challenges faced by programming language evolution, to share new ideas and insights, and to advance programming language design.

Topics include (but are not limited to):

* Programming language and software co-evolution
* Empirical studies and evidence-driven evolution
* Language-version integration and interoperation
* Historical retrospectives and experience reports
* Tools and IDE support for source-code mining and refactoring/rejuvenation
* Gradual feature introductions (e.g., optional type systems)

We are accepting two kinds of submission:

* Full papers (maximum 8 pages, ACM SIGPLAN 2 column, 9pt)
* Talk abstracts (may include an extended abstract, up to 3 pages in ACM SIGPLAN format)

We are proud to be supported by the Software Sustainability Institute (http://software.ac.uk).

Submission and publication
--

Please submit your abstracts/papers via EasyChair (https://easychair.org/conferences/?conf=ple15). Papers will be subject to full peer review, and talk abstracts will be subject to light peer review/selection. Accepted submissions will be published in the ACM DL. Any paper submitted must adhere to ACM SIGPLAN's republication policy. If you have any questions relating to the suitability of a submission please contact the program chairs at [email protected].

Important dates
---

All deadlines are 'anywhere-on-Earth'.

* Submission: Thursday 2nd April 2015
* Notification: Friday 1st May 2015
* Workshop: Tuesday 7th July 2015

Workshop format
---

The workshop schedule will comprise presentations given for accepted papers, short talks, and a keynote presentation by Bjarne Stroustrup. Depending on submissions, an afternoon discussion may be included.

Workshop organisation
-

Program chairs:
* Raoul-Gabriel Urma ([email protected])
* Dominic Orchard ([email protected])

General chair:
* Alan Mycroft

Program committee:
* Heather Miller (Ecole Polytechnique Federale de Lausanne, Switzerland)
* Sarah Mount (University of Wolverhampton, UK)
* Alan Mycroft (University of Cambridge, UK)
* Dominic Orchard (co-chair) (Imperial College London, UK)
* Jeff Overbey (Auburn University, AL, US)
* Max Schaefer (Semmle Ltd., Oxford, UK)
* Raoul-Gabriel Urma (co-chair) (University of Cambridge, UK)

--
Dr. Sarah Mount, Senior Lecturer, University of Wolverhampton
website: http://www.snim2.org/
twitter: @snim2
--
https://mail.python.org/mailman/listinfo/python-list
Python debuggers with sys.settrace()
This is a bit of an odd question, but is there any way for a Python debugger to suppress I/O generated by the program which is being debugged? I guess an "obvious" thing to do would be to replace core parts of the standard library and change any relevant imports in the locals and globals dicts to fake ones which don't generate I/O, but this seems brittle as the standard library will change over time. Is it possible to modify the byte-compiled code in each stack frame? Or is there a simpler way to do this?

Many thanks,

Sarah

--
Sarah Mount, Senior Lecturer, University of Wolverhampton
website: http://www.snim2.org/
twitter: @snim2
--
http://mail.python.org/mailman/listinfo/python-list
Re: Python debuggers with sys.settrace()
On 5 May 2010 10:17, Carl Banks wrote:
> On May 2, 11:06 am, Sarah Mount wrote:
>> This is a bit of an odd question, but is there any way for a Python
>> debugger to suppress I/O generated by the program which is being
>> debugged? I guess an "obvious" thing to do would be to replace core
>> parts of the standard library and change any relevant imports in the
>> locals and globals dicts to fake ones which don't generate I/O, but
>> this seems brittle as the standard library will change over time. Is
>> it possible to modify the byte-compiled code in each stack frame? Or
>> is there a simpler way to do this?
>
> It's not foolproof but you could try to reassign sys.stdout and
> sys.stderr to a bit bucket ("sys.stdout = open(os.devnull, 'w')"), then
> invoke the debugger with stdout set to sys.__stdout__ (the actual
> stdout). You'll have to create the Pdb() by hand since the built-in
> convenience functions don't do it. Check the file pdb.py for details.
>
Thanks Carl. I had considered this, but it won't catch things like
socket communication. Hmmm.
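For reference, a minimal sketch of Carl's suggestion (assuming Python 2.6+'s
pdb.Pdb(stdout=...) parameter; the helper name here is just illustrative):

import os, pdb, sys

def debug_quietly(func, *args, **kwargs):
    # Silence the debuggee's stdout/stderr while keeping the
    # debugger itself talking to the real stdout.
    sink = open(os.devnull, 'w')
    sys.stdout = sys.stderr = sink
    try:
        pdb.Pdb(stdout=sys.__stdout__).runcall(func, *args, **kwargs)
    finally:
        sys.stdout, sys.stderr = sys.__stdout__, sys.__stderr__
        sink.close()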
Cheers,
Sarah
--
Sarah Mount, Senior Lecturer, University of Wolverhampton
website: http://www.snim2.org/
twitter: @snim2
--
http://mail.python.org/mailman/listinfo/python-list
Re: Will python never intend to support private, protected and public?
> 1) Something that fixes the broken name mangling in the current
> system, but still doesn't try to defeat intentional unmangling.
> Currently, if you have a class with the same name as one of its
> superclasses, the name mangling can fail even in its existing purpose of
> preventing accidental collisions.
None of what follows offers any protection against malice, but at least
"normal" use will be safe. Are there any other idioms where __dict__s
are used as dumping grounds for strange objects, in particular
superclasses of that object?
(For this to work the instance needs a __dict__; fully __slots__'d
objects won't be amenable to it)
Instead of mangling in the class name:
class C:
    def __init__(self, x):
        self.__x = x
One could mangle in the class itself:
class C:
    def __init__(self, x):
        self.__dict__[C, 'x'] = x
(and retrieve similarly). Now classes with the same name are still
regarded as distinct. A variation on this is to have a single __dict__
entry for the class, which refers to another dict, so we would have
something like
def __init__(self, ...):
    privates = { }
    self.__dict__[C] = privates
    # ...

def f(self, ...):
    privates = self.__dict__[C]
    # ...
and then get or set privates[varName] as needed. (Or possibly use it
lazily via setdefault if you have a good idea of the what the defaults
should be.)
However this means non-strings end up in dir() which may cause things
like map(str.upper, dir(C())) to fail. Thus instead of C as the key we
could use `id(C)`.
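A minimal sketch of that `id` variant (the getter name is just
illustrative; backquotes are Python 2's repr, so the key stays a string):

class C(object):
    def __init__(self, x):
        # `id(C)` is repr(id(C)), a plain string, so dir() sees only strings
        privates = self.__dict__.setdefault(`id(C)`, {})
        privates['x'] = x

    def getX(self):
        return self.__dict__[`id(C)`]['x']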
The only chance for collision I can see in the `id` solution is if an
object changes its __class__, then the old class is GC'd, a new one is
loaded with the same address as the old, and the __class__ is changed to
(a subclass of) this new one. Frankly __class__ reassignment is fairly
esoteric, and already has the question of how well the __dict__ from the
old type masquerades as a new one.
Other scope for collision: if the classes in question are instances of
some mad metaclass which defines == to mean something other than
identity testing (is). This would only be needed if the objects weren't
just used as types, but also something else as well (e.g. collections of
all their instances, implemented via weakrefs).
Note that this won't quite work for private data in new-style classes, as
their __dict__ is read-only, but I offer the following peculiar hack:

class K(object):
    __dict__ = {}
    # ...
Any normal accesses to K.__dict__ will reach the wrapping dictproxy
object, but K.__dict__['__dict__'] is all yours to play with. If you
start by assigning __dict__ to locals() in the class initializer, then
writes to K.__dict__['__dict__'] will be visible just through
K.__dict__. Whether that's desirable or not depends on you. (locals()
isn't what you want if you're creating truly dynamic types via
type(names, bases, dict), though.)
Paranoid/super-open-minded types might wish to replace accesses to
__dict__ here with calls to object.__getattribute__(...,'__dict__') in
case any subclasses have strange semantics (which may make perfect sense
as far as they're concerned, but still potentially be a problem for you
having to second-guess them.)
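A one-line sketch of that paranoid access (obj here is any hypothetical
instance with a __dict__):

d = object.__getattribute__(obj, '__dict__')  # bypasses any overridden __getattribute__
d[`id(C)`, 'x'] = 42                          # then munge privates as before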
Hope this helps
John
PS Only slightly relevant: when checking out the possibilities for
private attrs on types, I ran up against dictproxies. What need did they
meet over plain dicts, why are they read-only (at the Python and CPython
level), why are they not instantiable directly (though you can always
just create a temporary type and steal its __dict__), and why is type's
__dict__ read-only but a class's isn't? I just have down'n'dirty type
dict munging to do, and it seems retrograde that it only works with old
(and nominally less flexible) classes.
--
http://mail.python.org/mailman/listinfo/python-list
[regex] case-splitting strings in unicode
I have to split some identifiers that are casedLikeThis into their component words. In this instance I can safely use [A-Z] to represent uppercase, but what pattern should I use if I wanted it to work more generally? I can envisage walking the string testing the unicodedata.category of each char, but is there a regex'y way to denote "uppercase"?

Thanks

John
--
http://mail.python.org/mailman/listinfo/python-list
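For reference, a minimal sketch of the unicodedata walk described above
(the function name is illustrative; 'Lu' is the Unicode uppercase-letter
category):

import unicodedata

def splitCased(s):
    # Split before each uppercase letter, using the Unicode
    # category 'Lu' rather than the ASCII-only [A-Z].
    words, current = [], []
    for ch in s:
        if unicodedata.category(ch) == 'Lu' and current:
            words.append(u''.join(current))
            current = []
        current.append(ch)
    if current:
        words.append(u''.join(current))
    return words

assert splitCased(u'casedLikeThis') == [u'cased', u'Like', u'This']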
wxPython Licence vs GPL
We have some Python code we're planning to GPL. However, bits of it were cut&pasted from some wxPython-licenced code to use as a starting point for implementation. It is possible that some fragments of this code remain unchanged at the end.

How should we refer to this in terms of copyright statements and bundled Licence files? Is there, say, a standard wording to be appended to the GPL header in each source file? Does the original author need to be named as one of the copyright holders, or should we state that ours is a derivative work from his? Which of these would be required under the terms of the Licence, and which by standard practice / courtesy?

(This assumes the wxPython Licence is compatible with the GPL -- if not, do we just cosmetically change any remaining lines, so none remain from the original?)

Thanks

John
--
http://mail.python.org/mailman/listinfo/python-list
Re: wxPython Licence vs GPL
> IIRC, wxPython license has nothing to do with GPL. Its license is far more
> "free" than GPL is. If you want to create commercial apps with wxPython, you
> can do it without messing with licenses.

This isn't a commercial app though, it's for a research project and apparently it's a requirement that we have to GPL it. It's a question of what we need to add to our code so that those fragments of someone else's wxPython-licenced code in our GPL'ed app leave no license transgressed, no copyright violated or unattributed, and no one with grounds to feel hard done by.

--
http://mail.python.org/mailman/listinfo/python-list
"static" data descriptors and possibly spurious calls to __set__?
I have a peculiar problem that, naturally, I cannot reproduce outside
the fairly long-winded code that spawned it. Thus the code snippets here
should be taken as shorthand for what I think are the relevant bits,
without actually leading to the erroneous (or at least undesired)
behaviour described.
The problem is that, in attempting to add an attribute to a subclass, a
synonymous attribute in the base
class is looked up instead, and its __set__ method invoked. The actual
setting is done via type.__setattr__(cls, attr, value).
The question is, under what circumstances can an attempt to add an
attribute to a derived class be interpreted as setting a descriptor of
the same name in the base class, and how (under those circumstances) can
one coax the desired behaviour out of the interpreter? It's no good just
stipulating that the derived classes are populated first to avoid this
sort of name-shadowing, as new classes will be dynamically discovered by
the user, and anyway existing classes can be repopulated by the user at
any time.
This slightly contrived example comes about in part from a need to have
"static" data descriptors, where an assignment to the descriptor via the
class (as opposed to via an instance), results in a call to its __set__,
rather than replacing the descriptor in the class's __dict__. (If you
really wanted to replace the descriptor, you'd have to del it explicitly
before
assigning to the attribute, and this is in fact done elsewhere in the
code where needed.) I do this by having a special metaclass whose
__setattr__ intercepts such calls:
class Meta(type):
    def __setattr__(self, attr, value):
        if attr not in self.__dict__ or isSpecial(attr):
            # the usual course of action
            type.__setattr__(self, attr, value)
        else:
            self.__dict__[attr].__set__(None, value)
(The isSpecial() function checks for __special_names__ and other things
that should be handled as normal). I've subsequently found that
applications such as jpype have successfully used this approach, so it
doesn't seem to be a bad idea in itself.
Then, if we have a descriptor...
class Descr(object):
    def __get__(self, inst, owner=None):
        print "Getting!", self, inst, owner
        return 17

    def __set__(self, inst, value):
        print "Setting", self, inst, value
...and then we have a (dynamically-created) class hierarchy...
Base = Meta('Base', (), {})
Derived = Meta('Derived', (Base,), {})
...then we (dynamically) assign a static descriptor to Base...
setattr(Base, 'x', Descr())
...and then to Derived...
setattr(Derived, 'x', Descr())
...it's this last one that causes the problem. In the real code, the
call to type.__setattr__ referred to above seems to lead to a call to
something like cls.__base__.__dict__[attr].__set__(cls, value). It
doesn't appear possible to recover from this from within the
descriptor's __set__ itself, as any attempt to set the class attribute
from there naturally leads to recursion. I can't set the class's
__dict__ directly, as it's a dictproxy and only exposes a read-only
interface (why is that, by the way?).
One further factor that may be relevant (though I couldn't see why), is
that the metaclass in question is set up (via __class__ reassignment) to
be an instance of itself, like the builtin type.
Any enlightenment on this matter would be gratefully received.
Thanks
John.
--
http://mail.python.org/mailman/listinfo/python-list
Block-structured resource handling via decorators
When handling resources in Python, where the scope of the resource is known, there seem to be two schools of thought:

(1) Explicit:

f = open(fname)
try:
    # ...
finally:
    f.close()

(2) Implicit: let the GC handle it.

I've come up with a third method that uses decorators to achieve a useful mix between the two. The scope of the resource is clear when reading the code (useful if the resource is only needed in part of a function), while one does not have to write an explicit cleanup. A couple of examples:

@withFile(fname, 'w')
def do(f):
    # ... write stuff into file f ...

@withLock(aLock)
def do():
    # ... whatever you needed to do once the lock was acquired,
    # safe in the knowledge it will be released afterwards ...

(The name "do" is arbitrary; this method has the "mostly harmless" side-effect of assigning None to a local variable with the function name.)

I find it clear because I come from a C++/C#/Java background, and I found C#'s using-blocks to be very useful, compared to the explicit finallys of Java. I know that Python's deterministic finalization sort of achieves the same effect, but I had been led to believe there were complications in the face of exceptions.

The implementation is easily extensible: a handler for a new type of resource can be written in a couple of lines. For the examples above:

class withFile(blockScopedResource):
    init, cleanup = open, 'close'

It's so simple I was wondering why I haven't seen it before. Possibly: it's a stupid idea and I just can't see why; everyone knows about it except me; it's counter-intuitive (that's not the way decorators were intended); it's "writing C# in Python" or in some other way unPythonic; I've actually had an idea that is both Original and non-Dumb.

If the last is the case, can someone let me know, and I'll put up the code and explain how it all works. On the other hand, if there is something wrong with it, please can someone tell me what it is?

Thanks

John Perks
--
http://mail.python.org/mailman/listinfo/python-list
Re: Block-structured resource handling via decorators
> The only cases I see the first school of thought is when the resource
> in question is "scarce" in some way.
By "resource" I meant anything with some sort of acquire/release
semantics. There may be plenty of threading.Locks available, but it's
still important that a given Lock is released when not needed.
For example, most OS's place a limit on the number of files a process can have open at once.
> > class withFile(blockScopedResource):
> >     init, cleanup = open, 'close'
> Well, I'd say that using a string for cleanup and a function for init
> is unpythonic.
I could have specified cleanup as lambda f:f.close(), but as I thought
it might be quite common to call a method on the resource for cleanup,
if a string is specified a method of that name is used instead.
> The question is whether having to turn your scope into a
> function to do this is more trouble than it's worth.
Needing one slightly contrived-looking line (the def) vs a try-finally
block with explicit cleanup code? I know which I'd prefer, but for all I
know I could be in a minority of 1 here.
> I'd certainly be interested in seeing the implementation.
And so you shall...
I start with the base class. It does all the work, everything else is
just tweaks for convenience. Normally, then, you wouldn't need to bother
with all the __init__ params.
class blockScopedResource(object):
    def __init__(self, init, cleanup,
                 initArgs, initKwargs, cleanupArgs, cleanupKwargs,
                 passResource, resourceIsFirstArg):
        self.init = init        # function to get resource
        self.cleanup = cleanup  # function to release resource
        self.initArgs, self.initKwargs = initArgs, initKwargs
        self.cleanupArgs, self.cleanupKwargs = cleanupArgs, cleanupKwargs
        self.passResource = passResource  # whether resource is passed into block
        # whether resource is arg to init, rather than returned from it
        self.resourceIsFirstArg = resourceIsFirstArg

    def __call__(self, block):
        resource = self.init(*self.initArgs, **self.initKwargs)
        if self.resourceIsFirstArg:
            resource = self.initArgs[0]
        try:
            if self.passResource:
                block(resource)
            else:
                block()
        finally:
            self.cleanup(resource, *self.cleanupArgs, **self.cleanupKwargs)
But this still won't do conveniently for files and locks, which are my
motivating examples.
The simpleResource class constructor gets its setup from attributes on
the type of the object being created, with sensible defaults being set
on simpleResource itself. As stated above, if a string is supplied as
init or cleanup, it is treated as a method name and that method is used
instead.
import types

def stringToMethod(f):
    # Getting the attribute from the class may have wrapped it into
    # an unbound method; in this case, unwrap it
    if isinstance(f, types.MethodType) and f.im_self is None:
        f = f.im_func
    if not isinstance(f, basestring): return f
    def helper(resource, *args, **kwargs):
        return getattr(resource, str(f))(*args, **kwargs)
    return helper
class simpleResource(blockScopedResource):
    def __init__(self, *initArgs, **initKwargs):
        # get attributes off type
        t = type(self)
        blockScopedResource.__init__(self,
            stringToMethod(t.init), stringToMethod(t.cleanup),
            initArgs, initKwargs, t.cleanupArgs, t.cleanupKwargs,
            t.passResource, t.resourceIsFirstArg)

    # defaults supplied here
    cleanupArgs, cleanupKwargs = (), {}
    passResource = True
    resourceIsFirstArg = False
Then useful implementations can be written by:
class withFile(simpleResource):
    init, cleanup = open, 'close'

class withLock(simpleResource):
    init, cleanup = 'acquire', 'release'
    passResource = False
    resourceIsFirstArg = True
And new ones can be created with a similar amount of effort.
Of course, one-liners can be done without using the decorator syntax:
withLock(aLock)(lambda:doSomething(withAnArg))
Gotcha: If you stack multiple resource-decorators, it won't do what you
want:
# !!! DOESN'T WORK !!!
@withLock(aLock)
@withLock(anotherLock)
def do():
    # ...
Either nest them explicitly (causing your code to drift ever further to
the right):

@withLock(aLock)
def do():
    @withLock(anotherLock)
    def do():
        # ...
Or come up with a multiple-resource handler, which shouldn't be too
hard:
@withResources(withLock(aLock), withLock(anotherLock), withFile('/dev/null'))
But I'll get round to that another day.
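For the curious, a minimal sketch of such a combinator, assuming the
blockScopedResource classes above (the recursive style is just one way to
get the releases nested correctly):

class withResources(object):
    # Acquires each resource in the order given, runs the block once
    # innermost, and releases in reverse order as the finallys unwind.
    def __init__(self, *handlers):
        self.handlers = handlers

    def __call__(self, block):
        collected = []  # resources the individual handlers pass in
        def run(remaining):
            if not remaining:
                return block(*collected)
            def inner(*resource):  # 0 or 1 args, per passResource
                collected.extend(resource)
                return run(remaining[1:])
            remaining[0](inner)
        run(self.handlers)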
--
http://mail.python.org/mailman/listinfo/python-list
Use of descriptor __get__: what defines an object as being a class?
I'm talking from the point of view of descriptors. Consider

a.x = lambda self: None # simple function

When a.x is later got, what criterion is used to see if a is a class (and so the func would have __get__(None, a) called on it)? Pre-metaclasses, one might assume it was

isinstance(a, (types.TypeType, types.ClassType))

but nowadays the type of a type could be anything. Is there a simple Python-level interface to determine what constitutes a type, as there is for, say, iterators? Could an object start being a type just by appropriately setting certain members, or is this distinction enforced at a lower level?

To confuse matters further, the descriptor docs in 2.4 say that descriptors can only be bound to objects, not classic class instances, but a few moments with a Python console disproves this (it is the descriptors themselves that have to be objects):

>>> def getting(*a): print "Getting!"
>>> class oldDesc: __get__ = getting
>>> class newDesc(object): __get__ = getting
>>> class Q: pass # classic class
>>> Q.x = oldDesc()
>>> Q.x
<__main__.oldDesc instance at 0x009ACAA8>
>>> Q.y = newDesc()
>>> Q.y
Getting!

This seems to be a more sensible state of affairs (otherwise function-to-method wrapping would have to be a special case), but http://www.python.org/doc/2.4/ref/descriptor-invocation.html says different.

What is the canonical distinction between classes and types, and between instances and objects? How would it apply in the face of such perversions as

from types import *
class C1:
    __metaclass__ = TypeType
class C2(object):
    __metaclass__ = ClassType

or reassignment of __class__ from a class to a type or vice versa? I would have thought that "most of the time" (i.e. excluding the above tricksiness) all that would be needed was

isinstance(x, object)

but this returns true when x is an Exception, which is guaranteed to be an instance and hence not an object.

Thanks

John
--
http://mail.python.org/mailman/listinfo/python-list
dynamic loading of code, and avoiding package/module name collisions.
Long story short: what I'm looking for is information on how to have a Python app that:

* embeds an editor (or wxNoteBook full of editors)
* loads code from the editors' text pane into the app
* executes bits of it
* then later unloads to make way for an edited version of the code. The new version needs to operate on a blank slate, so reload() may not be appropriate.

One of the reasons I am asking is that I am writing a simulator, where the behaviour of the objects is written in Python by the user (with callbacks into the simulator's API to update the display). How should I best go about this, with respect to loading (and unloading) of user code?

It is expected that each project will take the form of a Python package, with some files therein having standard names for particular purposes (presumably __init__.py will be one of these), and the rest being user code and data files. It is not expected that we restrict the user's package hierarchy to be only one level deep.

Given the name of a package: How can we be sure it doesn't conflict with a Python package/module name? What if a new module/package is added (e.g. to site-packages) that has the same name?

One possibility that occurred to me was having a package MyAppName.UserProject as part of the app, with the user's project root package occurring as a subpackage in there, and being reloaded (either via reload() or a method on imp) each time we re-start the simulation. One reason for this was to avoid top-level name collisions. Is this a good way to go about it, and if so, how should users' imports refer between modules in the same package, and across sub-packages?

Can someone please explain the interaction between the relevant parts of the imp module (which is presumably what I'll be needing) and: the import lock; sys.modules; sys.path? Is removing a module from sys.modules tantamount to uninstalling it? In particular, will imp.load_module() then create a new one?

When checking the user's intended filenames for validity as module names, is the canonical regexp to use [a-zA-Z_][a-zA-Z0-9_]*, or is it something more subtle? In general, on deciding on a module file name, how can one be sure that it doesn't conflict with another package (either standard or third-party)? In particular, if I have a thing.py file in my local dir, and then Python 2.5 brings out a (possibly undocumented) standard module called thing, how can I avoid problems with running python from my local dir? (As an experiment, I put a file called ntpath.py in my local dir, and I couldn't import site.) To make matters worse, I may be unaware of this Python upgrade if it is done by the sysadmin. The same applies with new keywords in Python, but this may be ameliorated by their gradual introduction via __future__.

If all this wasn't complicated enough, at some point we'll want to let the users write their code in Java as well (though not mixing languages in one project; that would make my brain hurt). Can someone point to a resource that will list the issues we should be aware of from the start? (Googling just gets references to Jython and JPype, rather than anything like, say, a list of gotchas.)

Thank you

John
--
http://mail.python.org/mailman/listinfo/python-list
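On the name-validity question above, a minimal sketch (Python 2,
imp-based; the helper name is illustrative, and the collision test only
catches modules currently findable on sys.path):

import imp
import keyword
import re

def usable_module_name(name):
    # Canonical identifier pattern, plus a keyword check.
    if not re.match(r'^[a-zA-Z_][a-zA-Z0-9_]*$', name):
        return False
    if keyword.iskeyword(name):
        return False
    # Reject names that already resolve to a module/package on sys.path.
    try:
        f, path, desc = imp.find_module(name)
        if f:
            f.close()
        return False
    except ImportError:
        return True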
wxPython OGL: How make LineShape text b/g transparent?
(Having problems receiving wxPython mailing list entries, so I'll ask here.)

I'm using wxPython 2.5.4, windows ansi version, on Python 2.4; the OS is Win98SE. My application manipulates a graph, with CircleShapes for nodes and LineShapes for arcs. The user needs to be able to get and set text on an arc.

When I use LineShape.AddText(), a large rectangle of background colour blocks out much of the line, crucially including the arrow at the midpoint. How can I make the text b/g transparent so the lines can be seen, and further, is it possible programmatically to displace the text such that the arrow is not covered?

Thanks

John
--
http://mail.python.org/mailman/listinfo/python-list
Re: Multiple threads in a GUI app (wxPython), communication between worker thread and app?
"fo" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > This is a network app, written in wxPython and the socket module. This > is what I want to happen: I'm not sure if this will help you, but it solved what was, for me, a more general problem: not (normally) being able to issue wxPython calls outside the GUI thread. I came up with a general-purpose thread-switcher, which, given a callable, would on invocation: queue itself up on the GUI event queue call its callable in the GUI thread (allowing arbitrary wxPython calls) pass its result back to the calling thread (or re-raise any exception there). Instead of having a dedicated queue, it uses one already in place. Because all calls using it are serialized, it had the beneficial side-effect (for me, anyway)of avoiding certain concurrency issues. (The calls to my locks module, CheckPause() and CheckCancel(), were there so the user could suspend, resume, and cancel worker threads at will, which the Python threading module does not naturally support(my locks module held some state that could be set via the GUI.) If you have no need of that, delete those lines and everything should still work (they were a late addition). import wx, threading, types import locks # my code, see remark above #- # decorator used to call a method (or other callable) # from the wxPython main thread (with appropriate switching) #-- class wxThreadSwitch(object): def __init__(self, callable): object.__init__(self) self.callable = callable def __get__(self, inst, owner=None): c = self.callable # if c is a descriptor then wrap it around # the instance as would have happened normally if not isinstance(c, types.InstanceType): try: get = c.__get__ args = [inst] if owner is not None: args.append(owner) return wxThreadSwitch(get(*args)) except AttributeError: pass # if we get here, then not a descriptor, # so return self unchanged return self def __call__(self, *args, **kwargs): if wx.Thread_IsMain(): return self.callable(*args, **kwargs) locks.CheckPause() c = self.__wxThreadCall(self.callable) wx.CallAfter(c, *args, **kwargs) return c.Result() class __wxThreadCall(object): def __init__(self, callable): assert not wx.Thread_IsMain() object.__init__(self) self.callable = callable self.result = None self.exc_info = None self.event = threading.Event() def __call__(self, *args, **kwargs): try: try: assert wx.Thread_IsMain() assert not self.event.isSet() locks.CheckCancel() self.result = self.callable(*args, **kwargs) except: self.exc_info = sys.exc_info() finally: self.event.set() def Result(self): self.event.wait() if self.exc_info: type, value, traceback = self.exc_info raise type, value, traceback return self.result A usage example would be to decorate a function or method with it: class Something: @wxThreadSwitch def someGUICallOrOther(): Here the method call would run via the wxThreadSwitch decorator which would do any necessary thread switching. Hope this helps John -- http://mail.python.org/mailman/listinfo/python-list
MRO problems with diamond inheritance?
Trying to create the "lopsided diamond" inheritance below:

>>> class B(object): pass
>>> class D1(B): pass
>>> class D2(D1): pass
>>> class D(D1, D2): pass
Traceback (most recent call last):
  File "", line 1, in ?
TypeError: Error when calling the metaclass bases
    Cannot create a consistent method resolution order (MRO) for bases D1, D2

Is this as intended? Especially since reversing the order makes it OK:

>>> class D(D2, D1): pass
>>> D.__mro__
(<class '__main__.D'>, <class '__main__.D2'>, <class '__main__.D1'>, <class '__main__.B'>, <type 'object'>)

Why should order of base classes matter? This only affects types, old-style classes are unaffected. The workaround to this problem (if it is a problem and not a feature that I'm misunderstanding) is to put more-derived classes at the front of the list. I ran into this with dynamically-created classes, and sorting the bases list with the following comparator put them right:

def cmpMRO(x, y):
    if x == y:
        return 0
    elif issubclass(x, y):
        return -1
    elif issubclass(y, x):
        return +1
    else:
        return cmp(id(x), id(y))

Incidentally, is it safe to have an mro() method in a class, or is this as reserved as the usual __reserved_words__? Does Python use mro() (as opposed to __mro__) internally or anything?

Thanks

John
--
http://mail.python.org/mailman/listinfo/python-list
Re: MRO problems with diamond inheritance?
"Michele Simionato" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> BTW, what it your use case? I have yet to see a single compelling use
> case for multiple inheritance, so I am
> curious of what your design is.
Before I start, I should mention that the workaround I listed previously
is bogus, but I think this:
listOfBases.sort(reverse=True, key=lambda b:len(b.__mro__))
has the effect of pulling the most derived classes to the front of the
bases list, which at any rate stops the exception being thrown.
My compelling use case for MI in Python is... better to model MI in Java
:)
Well, I'll come back to that. In the "real" world, I have found
single-implementation-multiple-interface inheritance (as presented by
Java and .NET) to be highly expressive while avoiding some of the issues
of multiple-implementation inheritance as presented by C++. (Though I
don't agree with Java's FUD that multiple-implementation inheritance is a
bad design choice simply because Java doesn't support it.) In one
application, we had a simple (tree-like, non-MI) hierarchy of abstract
interfaces which would be used by the API's clients:
class Thing { };                                // abstract
class SpecialThing : virtual public Thing { };  // abstract
and the concrete subclasses (that were not exposed via the API) mirrored
it:
class ThingImpl : virtual public Thing { };
class SpecialThingImpl : virtual public SpecialThing, virtual public ThingImpl { };
(That said, that same job sent me on a training course to learn not to
use inheritance *at* *all*, but rather cut'n'paste multiple copies of
the same code and maintain them in parallel. I got laid off when it
transpired that the entire project was just a figment of somebody's
imagination.)
I'm currently writing a package to embed Java in Python (another one,
just what the world needs), and I use Python's dynamic type creation to
"grow" a Python type hierarchy mirroring that of the underlying Java
object, which is where I'd been running into the MRO problems that
kicked off this whole thread. The Java classes are represented by Python
types, where the one for class Class is also the metaclass of all the
Java objects. Aside from those though, I've now had to model Java's MI
in both C++ and Python and found it to be surprisingly painless on both
occasions.
(In case you're interested, it all works via:
[Python client code] --> pyJav(Python) --> Boost.Python -->
pyJav._core(C++) --> MyJNI++ --> JNI --> [Java code]
where all the stuff you've never heard of is mine. I'm aiming to keep
the pyJav C++ layer as thin as possible, and do everything I can in
Python.)
John.
--
http://mail.python.org/mailman/listinfo/python-list
UTF16 codec doesn't round-trip?
(My Python uses UTF16 natively; can someone with UTF32 Python let me know if that behaves differently?)

>>> import codecs
>>> u'\ud800' # part of surrogate pair
u'\ud800'
>>> codecs.utf_16_be_encode(_)[0]
'\xd8\x00'
>>> codecs.utf_16_be_decode(_)[0]
Traceback (most recent call last):
  File "", line 1, in ?
UnicodeDecodeError: 'utf16' codec can't decode bytes in position 0-1: unexpected end of data

If the byte string can't be recognized as UTF16, then surely the codec shouldn't have allowed it to be encoded in the first place? I could understand if it was trying to decode the bytes into (native) UTF32.

On a similar note, if you are using UTF32 natively, are you allowed to have raw surrogate escape sequences (paired or otherwise) in unicode literals?

Thanks

John
--
http://mail.python.org/mailman/listinfo/python-list
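The u'\ud800' above is a lone surrogate, which is what trips the decoder.
A minimal sketch that flags lone surrogates before encoding (assuming a
narrow/UTF-16 build, where each code unit is a separate character; the
function name is illustrative):

def lone_surrogates(u):
    # Return indices of surrogate code units not part of a valid pair.
    bad, i = [], 0
    while i < len(u):
        c = ord(u[i])
        if 0xD800 <= c <= 0xDBFF:        # high surrogate...
            if i + 1 < len(u) and 0xDC00 <= ord(u[i + 1]) <= 0xDFFF:
                i += 2                   # ...with its low partner: fine
                continue
            bad.append(i)
        elif 0xDC00 <= c <= 0xDFFF:      # stray low surrogate
            bad.append(i)
        i += 1
    return bad

assert lone_surrogates(u'\ud800') == [0]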
