Re: [Python-Dev] [python] Should we do away with unbound methods in Py3k?
Michael Foord wrote:
> Guido van Rossum wrote:
>> I'm asking a Py3k question on python-dev because I'd like to have
>> opinions from people who haven't thought about Py3k much yet. Consider
>> the following example:
>>
>> class C:
>>     def foo(self): pass
>>
>> C.foo(42)
>>
>> This currently fails with this error message:
>>
>> TypeError: unbound method foo() must be called with C instance as
>> first argument (got int instance instead)
>>
>> This error is raised when isinstance(self, C) returns False, where
>> self is the first argument passed to the unbound method.
>>
>> That's nice, but there is a cost associated with this: the expression
>> "C.foo" can't just return the function object "foo"; it has to wrap it
>> in an unbound method object. In Py3k the cost of calling an unbound
>> method object goes up, because the isinstance() check may be
>> overloaded. This typically happens when the class C uses the special
>> metaclass (abc.ABCMeta) used for virtual inheritance (see PEP 3119).
>> In Py3k the I/O stream classes are perhaps the most common use case.
>>
>> Given that the error is of limited value and that otherwise the
>> unbound method behaves exactly the same as the original function
>> object, I'd like to see if there are strenuous objections against
>> dropping unbound method objects altogether (or at least not using them
>> in this case), so that explicit super calls (via the unbound method)
>> may go a little faster. Also, it would make it easier to fix this
>> issue: http://bugs.python.org/issue1109
>
> On occasion I've found it a drag that you *can't* call unbound methods
> with a different type. Python normally allows duck typing, and this is
> one place it actually imposes type restrictions...
>
> I'd be happy to see this restriction go.
> :-)
>
> Michael Foord
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/krumms%40gmail.com

+1 to getting rid of unbound methods!
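Since this thread is about Py3k behaviour, a small sketch of what dropping unbound methods looks like from user code in Python 3, where this proposal was ultimately adopted: C.foo is a plain function, and the isinstance check on the first argument is gone.

```python
class C:
    def foo(self):
        # Return the type name of whatever was passed as "self".
        return type(self).__name__

# In Python 3, C.foo is a plain function -- no unbound method wrapper.
# The normal case still works:
print(C.foo(C()))   # C

# And duck typing now works too: the old
# "TypeError: unbound method foo() must be called with C instance..."
# is gone, so any first argument is accepted.
print(C.foo(42))    # int
```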
Re: [Python-Dev] Global Python Sprint Weekends: May 10th-11th and June 21st-22nd.
Anybody in Melbourne keen for this? Not sure if I'll be able to make it myself, but I'd be interested to know if there's anybody in the area keen to do the sprint.

Cheers,
T

Tarek Ziadé wrote:
> On Wed, Apr 16, 2008 at 8:40 PM, Michael Foord
> <[EMAIL PROTECTED]> wrote:
>> Trent Nelson wrote:
>>> Following on from the success of previous sprint/bugfix weekends and
>>> sprinting efforts at PyCon 2008, I'd like to propose the next two
>>> Global Python Sprint Weekends take place on the following dates:
>>>
>>> * May 10th-11th (four days after 2.6a3 and 3.0a5 are released)
>>> * June 21st-22nd (~week before 2.6b2 and 3.0b2 are released)
>>>
>>> It seems there are a few of the Python User Groups keen on meeting
>>> up in person and sprinting collaboratively, akin to PyCon, which I
>>> highly recommend. I'd like to nominate Saturday across the board
>>> as the day for PUGs to meet up in person, with Sunday geared more
>>> towards an online collaboration day via IRC, where we can take care
>>> of all the little things that got in our way of coding on Saturday
>>> (like finalising/preparing/reviewing patches, updating the tracker
>>> and documentation, writing tests ;-).
>>>
>>> For User Groups that are planning on meeting up to collaborate,
>>> please reply to this thread on python-dev@python.org and let every-
>>> one know your intentions!
>>
>> I should be able to help organise and attend the London contribution.
>> Personally I'd like to work on the documentation changes / clean-up
>> for the unittest module discussed recently.
>
> We are trying to set up a team here in Paris.
>
> Personally I would like to continue the work started in distutils
> (various patches), and some friends here are interested in
> contributing on documentation.
>
> Tarek
Re: [Python-Dev] Global Python Sprint Weekends: May 10th-11th and June 21st-22nd.
Anyone in Melbourne, Australia keen for the first sprint? I'm not sure if I'll be available, but if I can it'd be great to work with some others. Failing that, it's Red Bull and pizza in my lounge room :)

I've been working on some neat code for an AST optimizer. If I'm free that weekend, I'll probably continue my work on that.

Cheers,
T

Trent Nelson wrote:
> Following on from the success of previous sprint/bugfix weekends and
> sprinting efforts at PyCon 2008, I'd like to propose the next two
> Global Python Sprint Weekends take place on the following dates:
>
> * May 10th-11th (four days after 2.6a3 and 3.0a5 are released)
> * June 21st-22nd (~week before 2.6b2 and 3.0b2 are released)
>
> It seems there are a few of the Python User Groups keen on meeting
> up in person and sprinting collaboratively, akin to PyCon, which I
> highly recommend. I'd like to nominate Saturday across the board
> as the day for PUGs to meet up in person, with Sunday geared more
> towards an online collaboration day via IRC, where we can take care
> of all the little things that got in our way of coding on Saturday
> (like finalising/preparing/reviewing patches, updating the tracker
> and documentation, writing tests ;-).
>
> For User Groups that are planning on meeting up to collaborate,
> please reply to this thread on python-dev@python.org and let every-
> one know your intentions!
>
> As is commonly the case, #python-dev on irc.freenode.net will be the
> place to be over the course of each sprint weekend; a large proportion
> of Python developers with commit access will be present, increasing
> the number of eyes available to review and apply patches.
>
> For those that have an idea of the areas they'd like to sprint on and
> want to look for other developers to rope in (or just to communicate
> plans in advance), please also feel free to jump on this thread via
> python-dev@ and indicate your intentions.
>
> For those that haven't the foggiest what to work on, but would like
> to contribute, the bug tracker at http://bugs.python.org is the best
> place to start. Register an account and start searching for issues
> that you'd be able to lend a hand with. All contributors that submit
> code patches or documentation updates will typically get listed in
> Misc/ACKS.txt; come September, when the final releases of 2.6 and 3.0
> come about, you'll be able to point at the tarball or .msi and exclaim
> loudly ``I helped build that!'', and actually back it up with hard
> evidence ;-)
>
> Bring on the pizza and Red Bull!
>
> Trent.
Re: [Python-Dev] Module Suggestion: ast
Just a thought, but it would be great if this could be implemented over the top of a C layer that operates on real AST nodes (rather than the PyObject representation of those nodes). I'm working on stuff to perform code optimization at the AST level (see the tlee-ast-optimize branch), and the functionality you're describing may wind up being very useful to me.

I've got more to say on the topic, but I'm at work right now. Just something to keep in mind.

Cheers,
T

Armin Ronacher wrote:
> Hi all,
>
> I would like to propose a new module for the stdlib for Python 2.6
> and higher: "ast". The motivation for this module is the pending
> deprecation of compiler.ast, which is widely used (debugging, template
> engines, code coverage etc.). _ast is a very solid module and is
> without a doubt easier to maintain than compiler.ast, which was
> written in Python, but it's lacking some features such as pretty
> printing the AST or traversing it. The idea of "ast" would be adding
> high-level functionality for easier working with the AST. It currently
> provides these features:
>
> - pretty printing AST objects
> - a parse function as an easier alias for compile() + flag
> - operator-node -> operator symbol mappings (eg: _ast.Add -> '+')
> - methods to modify lineno / col_offset (incrementing or copying the
>   data over from existing nodes)
> - getting the fields of nodes as dicts
> - iterating over all child nodes
> - a function to get the docstring of an AST node
> - a node walker that yields all child nodes recursively
> - a `NodeVisitor` and `NodeTransformer`
>
> Additionally there is a `literal_eval` function in that module that
> can safely evaluate Python literals in a string.
>
> Module and unittests are located in the Pocoo Sandbox HG repository:
>
> http://dev.pocoo.org/hg/sandbox/file/tip/ast/ast.py
> http://dev.pocoo.org/hg/sandbox/file/tip/ast/test_ast.py
>
> A slightly modified version of the ast.py module for Python 2.5
> compatibility is currently in use by the Mako template engine to
> achieve support for Google's AppEngine.
>
> An example module for the NodeVisitor is in the repository which
> converts a Python AST back into Python source code:
>
> http://dev.pocoo.org/hg/sandbox/file/tip/ast/codegen.py
>
> Regards,
> Armin
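For readers following along today: the proposed module did land in the stdlib as `ast`. A quick sketch of a few of the features listed above, using the names as they exist in the stdlib module (not the sandbox version):

```python
import ast

# The "parse function as easier alias for compile() + flag":
tree = ast.parse("x = [1, 2, 3]")

# Pretty printing AST objects:
print(ast.dump(tree))

# Safely evaluating a Python literal in a string:
print(ast.literal_eval("[1, 2, {'a': 3}]"))  # [1, 2, {'a': 3}]

# Iterating over child nodes with a NodeVisitor:
class NameCollector(ast.NodeVisitor):
    def __init__(self):
        self.names = []

    def visit_Name(self, node):
        self.names.append(node.id)

collector = NameCollector()
collector.visit(tree)
print(collector.names)  # ['x']
```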
[Python-Dev] Optimization of Python ASTs: How should we deal with constant values?
Hi all,

I've been working on optimization of the AST, including the porting of the old bytecode-level optimizations to the AST level. A few questions have come up in the process of doing this, all of which are probably appropriate for discussion on this list. The code I'm referring to here can be found in the tlee-ast-optimize branch. Most of the relevant code is in Python/optimize.c and Python/peephole.c.

Nearly all of the bytecode-level optimizations have been moved to the AST optimizer, with a few exceptions. Most of those waiting to be ported are stuck in limbo due to the fact we can't yet inject arbitrary PyObject constants into the AST. Examples are tuples of constants and the optimization of "LOAD_GLOBAL/LOAD_NAME None" as "LOAD_CONST None". This leaves us with a few options:

1. A new AST expr node for constant values of types other than Str/Num.

   I imagine this to be something like Const(PyObject* v), which is
   effectively translated to a "LOAD_CONST v" by the compiler. This
   trades the purity of the AST for a little practicality. A "Const"
   node has no real source representation; it would exist solely for
   the purpose of injecting PyObject constants into the AST.

2. Allow arbitrary annotations to be applied to the AST as compiler
   hints.

   For example, each AST node might have an optional dict that contains
   a set of annotation values. Then, when traversing the AST, the
   compiler might do something along the lines of:

       if (expr->annotations) {
           PyObject* constvalue =
               PyDict_GetItemString(expr->annotations, "constantvalue");
           if (constvalue)
               ADDOP_O(c, LOAD_CONST, constvalue, consts);
           else
               VISIT(c, expr, expr);
       }

   This is a more general solution if we want other compiler hints down
   the track, but unless somebody can think of another use case this is
   probably overkill.

3. Keep these particular optimizations at the bytecode level.

   It would be great to be able to perform the optimizations at a
   higher level, but this would require no effort at all. This would
   mean two passes over the same code at two different levels.

If anybody familiar with this stuff could weigh in on the matter, it would be much appreciated. I've got a list of other issues that I need to discuss here, but this would be a great start.

Thanks,
Tom
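For what it's worth, later Pythons effectively took option 1: the `ast` module now has a `Constant` node that carries an arbitrary constant value, which is essentially the Const(PyObject* v) idea. A minimal sketch of the idea at the Python level, folding constant arithmetic into a single constant node (the `FoldConstants` name is mine, purely illustrative):

```python
import ast

class FoldConstants(ast.NodeTransformer):
    """Fold BinOp(Constant, op, Constant) into a single Constant node."""

    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first, bottom-up
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and isinstance(node.op, (ast.Add, ast.Sub, ast.Mult))):
            ops = {ast.Add: lambda a, b: a + b,
                   ast.Sub: lambda a, b: a - b,
                   ast.Mult: lambda a, b: a * b}
            value = ops[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value), node)
        return node

tree = ast.parse("x = 1 + 2 * 3")
tree = ast.fix_missing_locations(FoldConstants().visit(tree))
ns = {}
exec(compile(tree, "<folded>", "exec"), ns)
print(ns["x"])  # 7 -- computed at "compile" time, not run time
```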
Re: [Python-Dev] Optimization of Python ASTs: How should we deal with constant values?
Martin v. Löwis wrote:
>> This leaves us with a few options:
>
> 5. Reuse/Abuse Num(object) for arbitrary constants. AFAICT, this
> should work out of the box.

Eek. It *does* seem like Num would work out of the box, but would this be a good idea? What about *replacing* Num with Const? Might make optimizations specifically for numeric values slightly hairier, and semantically I think they might be different enough to warrant separate AST nodes despite the similarity in implementation at the compiler level. FWIW, I read Num as "numeric literal" and Const as "arbitrary constant", but that's probably only because I've seen the immediate need for constants with non-Num values in the process of writing the AST optimizer.

>> 1. A new AST expr node for constant values for types other than
>> Str/Num. I imagine this to be something like Const(PyObject* v),
>> which is effectively translated to a "LOAD_CONST v" by the compiler.
>> This trades the purity of the AST for a little practicality. A
>> "Const" node has no real source representation, it would exist solely
>> for the purpose of injecting PyObject constants into the AST.
>
> I think this is the way to go. It doesn't violate purity: it is an
> *abstract* syntax, meaning that there doesn't need to be a 1:1
> relationship to source syntax. However, it is still possible to
> reproduce source code from this Const node.

I'm leaning toward this, too. It's dirt simple and quite clean to implement.

> I also don't worry about Jython conflicts. The grammar has a version
> number precisely so that you can refer to a specific version if you
> need to.

Any Jython folk care to weigh in on this? If there are no major objections I think I'm going to forge ahead with an independent Const() node.

Cheers,
T
[Python-Dev] AST Optimization: Branch Elimination in Generator Functions
The next problem that cropped up during the implementation of the AST code optimizer is related to branch elimination and the elimination of any code after a return. Within a FunctionDef node, we would (ideally) like to blow away If nodes with a constant - but false - test expression, e.g.:

    def foo():
        if False:
            # ... stuff ...

For most functions this will cause no problems and the code will behave as expected. However, if the eliminated branch contains a "yield" expression, the function is actually a generator function - even if the yield expression can never be reached:

    def foo():
        if False:
            yield 5

In addition to this, the following should also be treated as a generator even though we'd like to be able to get rid of all the code following the "return" statement:

    def foo():
        return
        yield 5

Again, blowing away the yield results in a normal function instead of a generator. Not what we want: we need to preserve the generator semantics.

Upon revisiting this, it's actually made me reconsider the use of a Const node for the earlier problem relating to arbitrary constants. We may be better off with annotations after all ... then we could mark FunctionDef nodes as being generators at the AST level to force the compiler to produce code for a generator, but eliminate the branches anyway. The other alternative I can think of is injecting a yield node somewhere unreachable and ensuring it doesn't get optimized away, but this seems pretty hacky in comparison.

Any other ideas?

Cheers,
Tom
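The semantics that make this tricky are easy to demonstrate from plain Python: whether a function is a generator is decided by the presence of a yield anywhere in its body, reachable or not.

```python
import inspect

def foo():
    if False:
        yield 5

def bar():
    return
    yield 5

# Both are generator functions even though the yields never execute:
print(inspect.isgeneratorfunction(foo))  # True
print(inspect.isgeneratorfunction(bar))  # True

# Calling them returns generators that simply yield nothing:
print(list(foo()))  # []
print(list(bar()))  # []
```

An optimizer that deleted the dead branches would silently turn these into ordinary functions returning None, which is exactly the behaviour change being avoided.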
Re: [Python-Dev] Module Suggestion: ast
I'm in the process of writing C code for the purposes of traversing AST nodes in the AST optimization branch. This seems to be an ideal case for code generation based on the ASDL representation of the AST, as we're currently doing for Python-ast.[ch]. I'm already considering this approach for some code I need to visit the C AST representation.

We could likely write a similar generator for the PyObject representation of the AST, which might be useful for the proposed "ast" module. This would ensure that both implementations of the AST visitation code are always kept in step with the ASDL.

The only real problem I can foresee with this is that the code in asdl_c.py is already getting pretty hairy. Adding to the mess is going to make it worse still. Maybe this will serve as a good opportunity to clean it up a little?

Any objections?

Cheers,
T

Paul Moore wrote:
> 2008/5/1 Georg Brandl <[EMAIL PROTECTED]>:
>> Armin Ronacher schrieb:
>>> I would like to propose a new module for the stdlib for Python 2.6
>>> and higher: "ast".
>>
>> If there are no further objections, I'll add this to PEP 361 so that
>> the proposal doesn't get lost.
>
> Excuse my confusion over process, but if this is to go into 2.6, does
> that mean it needs to be ready before the first beta? Or is there a
> more relaxed schedule for the stdlib (and if so, what is the deadline
> for the stdlib)? The same question probably applies to the stdlib
> reorg...
>
> Paul.
Re: [Python-Dev] Optimization of Python ASTs: How should we deal with constant values?
Nick Coghlan wrote:
> As Thomas mentions in a later message, making it possible to annotate
> nodes would permit Functions to be annotated as being a generator at
> the AST stage (currently it is left to the bytecode compiler's
> symtable generation pass to make that determination). Although I guess
> an alternative solution to that would be to have separate AST nodes
> for Functions and Generators as well...

I've actually backtracked a little and gone back down the Const path again. I know this is the third time I've changed my mind, but it's primarily because annotations tend to get a little clunky (or my implementation was, at least). Using Const nodes feels a lot more natural inside the optimizer. I think it's going to stick, at least in the short term.

Rather than separate FunctionDef and GeneratorDef nodes, I think a new bool attribute (is_generator?) on FunctionDef should do the job nicely. Further, I'm thinking we can move the "generator detection code" from the symtable into Python/ast.c so the generator/function information is available to the optimizer. This is made a little tricky by the absence of the contextual information that is normally available when flagging generators in the symtable: when generating AST nodes for a suite, we know nothing about the parent node in which the suite resides. Still, it might be doable. If this winds up being ugly, we might need to fall back to the original plan of a separate pass over function bodies to detect yield expressions.

I'll look into all this tomorrow night, along with any other crazy suggestions. For now I need to sleep a few hours. :)

Thanks for the feedback, it's much appreciated.

Cheers,
T
Re: [Python-Dev] Optimization of Python ASTs: How should we deal with constant values?
Nick Coghlan wrote:
> There are a lot of micro-optimisations that are actually context
> independent, so moving them before the symtable pass should be quite
> feasible - e.g. replacing "return None" with "return", stripping dead
> code after a return statement, changing an "if not" statement into an
> "if" statement with the two suites reversed, changing "(1, 2, 3)" into
> a stored constant, folding "1 + 2" into the constant "3". I believe
> the goal is to see how many of the current bytecode optimisations can
> actually be brought forward to the AST generation stage, rather than
> waiting until after the bytecode symtable calculation and compilation
> passes.

That's been the aim so far. It's been largely successful with the exception of a few edge cases (most notably the functions vs. generators stuff). The elimination of unreachable paths (whether they be things like "if 0: ..." or "return; ... more code ...") completely breaks generators, since we might potentially be blowing away "yield" statements during the elimination process. The rest of the optimizations, as far as I can see, are much less scary.

> The current structure goes:
>
>     tokenisation -> AST construction -> symtable construction ->
>     bytecode compilation -> bytecode optimisation
>
> My understanding of what Thomas is trying to do is to make it look
> more like this:
>
>     tokenisation -> AST construction -> AST optimisation ->
>     symtable construction -> bytecode compilation

That's exactly right.

I made a quick and dirty attempt at moving the AST optimization step after the symtable generation on the train home last night, to see if Jeremy's suggestion gives us anything. It does make the detection of generators a little easier, but it really doesn't give us all that much else. I'm happy to post a patch so you can see what I mean, but for now I think visiting the FunctionDef subtree to check for Yield nodes will be fine. Again, I might be wrong (as I've often been throughout this process!) but let's see how it goes. Obviously a bit less efficient, but function bodies really shouldn't be all that deep anyway.

I've got a good idea about how I'm going to go forward with this.

Cheers,
T
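A sketch of the "visit the FunctionDef subtree to check for Yield nodes" fallback, written against today's `ast` module (`is_generator` here is a hypothetical helper, not CPython code). The subtlety is that the visitor must not descend into nested functions or lambdas, whose yields don't make the *outer* function a generator:

```python
import ast

def is_generator(funcdef):
    """Return True if this FunctionDef's body contains a yield."""

    class YieldFinder(ast.NodeVisitor):
        def __init__(self):
            self.found = False

        def visit_Yield(self, node):
            self.found = True

        visit_YieldFrom = visit_Yield

        # Don't recurse into nested scopes: their yields don't count.
        def visit_FunctionDef(self, node):
            pass

        visit_AsyncFunctionDef = visit_FunctionDef
        visit_Lambda = visit_FunctionDef

    finder = YieldFinder()
    for stmt in funcdef.body:  # visit the body, not the def itself
        finder.visit(stmt)
    return finder.found

src = (
    "def f():\n"
    "    if False:\n"
    "        yield 5\n"
    "\n"
    "def g():\n"
    "    def inner():\n"
    "        yield 1\n"
    "    return inner\n"
)
f_def, g_def = ast.parse(src).body
print(is_generator(f_def))  # True -- unreachable yield still counts
print(is_generator(g_def))  # False -- the yield belongs to inner()
```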
Re: [Python-Dev] Optimization of Python ASTs: How should we deal with constant values?
Adam Olsen wrote:
> On Thu, May 8, 2008 at 5:22 PM, Thomas Lee <[EMAIL PROTECTED]> wrote:
>> [snip - quoted in full in the previous message]
>>
>> That's been the aim so far. It's been largely successful with the
>> exception of a few edge cases (most notably the functions vs.
>> generators stuff). The elimination of unreachable paths (whether they
>> be things like "if 0: ..." or "return; ... more code ...") completely
>> breaks generators, since we might potentially be blowing away "yield"
>> statements during the elimination process.
>
> Also breaks various sanity checks relating to the global statement.

What sanity checks are these exactly? Is this related to the lnotab?

Cheers,
T
Re: [Python-Dev] Optimization of Python ASTs: How should we deal with constant values?
Nick Coghlan wrote:
> Steve Holden wrote:
>> While not strictly related to the global statement, perhaps Adam
>> refers to the possibility of optimizing away code with an assignment
>> which would make a name be recognized as local?
>
> If you're worried about "yield" disappearing, you should also be
> worried about assignments disappearing, since that might cause names
> to be interpreted as globals. And once you start annotating functions
> as generators or not, and variable names as locals or cell variables
> or globals, you're starting to build up a substantial fraction of the
> information that is already collected during the symtable construction
> pass.
>
> Perhaps the initial attempt at this should just focus on identifying
> those operations which have the potential to alter the results of the
> symtable construction, and leave those to the bytecode optimisation
> step for the moment. Doing the symtable pass twice seems fairly
> undesirable, even if it does let us trim some dead code out of the
> AST.

Sounds good. We can always come back to it.

Cheers,
T
[Python-Dev] availability of httplib.HTTPResponse.close
I was debating whether this was truly a question for python-dev or if I should take it to one of the user lists. Ultimately it feels like a question about the implementation of a core module, so hopefully nobody minds me posting it here. :)

Although not listed as a public API method in the documentation, it would seem the httplib.HTTPResponse.close method might be useful in the event we don't want to actually read any of the data being sent back from the server. If I'm to assume that an undocumented method is considered "private", currently the only way to force the underlying file-like socket wrapper to close is by calling read() until no more data remains (as per the documentation).

What's the reasoning behind requiring callers to read() all the pending data vs. just closing the socket? Is the close method really off limits to code using HTTPResponse?

Cheers,
Tom
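For what it's worth, in Python 3 httplib became http.client, and there HTTPResponse.close() is a documented public method (the response object is an io.BufferedIOBase). A self-contained sketch of the "close without draining" pattern being asked about, against a throwaway local server (the server details here are purely illustrative):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello world"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status)  # 200

# Close the response without read()ing the pending body:
resp.close()
print(resp.closed)  # True

conn.close()
server.shutdown()
```

The trade-off remains the same as in the httplib days: closing without draining may force the connection itself to be torn down rather than reused.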
[Python-Dev] Tuple pack/unpack and the definition of AST Assign nodes
In porting one of the old peephole optimizations to the new AST compiler, I noticed something weird going on with the following code:

    a, b, c = 1, 2, 3

As you would expect, this gets parsed into an Assign node. That Assign node looks like the following:

    Assign.targets = [Tuple(Name(a), Name(b), Name(c))]
    Assign.value = Tuple(1, 2, 3)

What's weird here is that Assign.targets is an asdl_seq ... why are we wrapping the names in a Tuple() node? Shouldn't it look something more like this:

    Assign.targets = [Name(a), Name(b), Name(c)]

I understand that parsing the testlist might yield a tuple, and it was thus easier to just use the tuple rather than unpack it into an asdl_seq ... but if this was the intention, then why is Assign.targets an expr* rather than a plain old expr?

Cheers,
Tom
Re: [Python-Dev] Tuple pack/unpack and the definition of AST Assign nodes
Nick Coghlan wrote:
> I haven't looked at that code recently, but I believe the ASDL
> sequence in the assignment node is for statements where there are
> actually multiple assignment targets, such as:
>
> >>> p = x, y = 1, 2
> >>> p, x, y
> ((1, 2), 1, 2)
>
> Cheers,
> Nick.

Ah, I see. A quick test verifies exactly this:

    >>> import _ast
    >>> ast = compile("p = x, y = 1, 2", "", "exec", _ast.PyCF_ONLY_AST)
    >>> ast.body[0]
    <_ast.Assign object at 0xb7d0122c>
    >>> ast.body[0].targets
    [<_ast.Name object at 0xb7d0124c>, <_ast.Tuple object at 0xb7d0128c>]
    >>> ast.body[0].value
    <_ast.Tuple object at 0xb7d0132c>

I thought this would have been implemented as nested Assign nodes, but I'm obviously wrong. :) Thanks for the clarification.

Cheers,
T
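The same experiment in today's `ast` module (where ast.parse is the shorter spelling of the compile-with-flag idiom): one Assign node, one entry in .targets per assignment target, and the tuple-unpacking case is a single Tuple target wrapping the names.

```python
import ast

assign = ast.parse("p = x, y = 1, 2").body[0]

# One Assign node with two targets -- Name(p) and Tuple(x, y):
print(type(assign).__name__)                       # Assign
print([type(t).__name__ for t in assign.targets])  # ['Name', 'Tuple']
print(type(assign.value).__name__)                 # Tuple

# The simple unpacking case has a single Tuple target:
simple = ast.parse("a, b, c = 1, 2, 3").body[0]
print([type(t).__name__ for t in simple.targets])  # ['Tuple']
print([n.id for n in simple.targets[0].elts])      # ['a', 'b', 'c']
```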
Re: [Python-Dev] Assignment to None
Tony Nelson wrote:
> At 4:46 PM +0100 6/9/08, Michael Foord wrote:
>> Alex Martelli wrote:
>>> The problem is more general: what if a member (of some external
>>> object we're proxying one way or another) is named print (in Python
>>> < 3), or class, or...? To allow foo.print or bar.class would require
>>> pretty big changes to Python's parser -- I have vague memories that
>>> the issue was discussed ages ago (possibly in conjunction with some
>>> early release of Jython) but never went anywhere much (including
>>> proposals to automatically append an underscore to such IDs in the
>>> proxying layer, etc etc). Maybe None in particular is enough of a
>>> special case (if it just happens to be hugely often used in dotNET
>>> libraries)?
>>
>> 'None' as a member does occur particularly frequently in the .NET
>> world. A halfway house might be to state (something like): Python as
>> a language disallows you from having names the same as keywords or
>> 'None'. An implementation restriction specific to CPython is that the
>> same restriction also applies to member names. Alternative
>> implementations are free to not implement this restriction, with the
>> caveat that code using reserved member names directly will be invalid
>> syntax for CPython. ...
>
> Or perhaps CPython should just stop trying to detect this at compile
> time. Note that while assignment to ".None" is not allowed,
> setattr(foo, "None", 1) and then referencing ".None" is allowed.

I'm +0 on this at the moment, but I can understand the desire for it. Maybe we should stop trying to check for this assignment on attributes? Currently there are separate checks for assignment to None: one for the "foo.None = ..." form, another for the "None = ..." form. Removing the check for the former looks like it would be a one-liner.

Cheers,
T
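The behaviour under discussion can be probed without tripping the compiler on the probing code itself, by feeding the offending statements to compile(). In Python 3, where None became a keyword, both forms raise SyntaxError, while setattr() still sneaks the attribute through exactly as Tony describes:

```python
# Both the plain and the attribute form are compile-time errors:
for src in ("None = 1", "foo.None = 1"):
    try:
        compile(src, "<test>", "exec")
    except SyntaxError as e:
        print(src, "->", type(e).__name__)

# But setattr() works for arbitrary attribute names, including "None":
class Foo:
    pass

f = Foo()
setattr(f, "None", 1)
print(getattr(f, "None"))  # 1
```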
Re: [Python-Dev] Assignment to None
Martin v. Löwis wrote:
>> The question is, what is the specification for Python.
>
> Now, that's a more interesting question than the question originally
> asked (which I interpreted as "why does it work the way it works").
> The only indication in the specification of that feature I could find
> was:
>
> http://docs.python.org/dev/library/constants.html
> "Changed in version 2.4: Assignments to None are illegal and raise a
> SyntaxError."
>
> Now, given that this talks about the built-in namespace, this *doesn't*
> specify that foo.None=1 should also raise a syntax error. So the
> implementation apparently deviates from the specification. In Python 3,
> None, True, and False are keywords, so clearly, the intended semantics
> is also the implemented one (and the language description for 2.x needs
> to be updated/clarified).

Interestingly enough, the semantics of True, False and None are different from one another in 2.6: True = "blah" and False = 6 are perfectly legal in Python <= 2.6.

Funny, I just ran into this. I was trying to figure out why the AST optimization code was breaking test_xmlrpc ... turns out xmlrpclib defines xmlrpclib.True and xmlrpclib.False and the optimizer was trying to resolve them as constants while compiling the module. Ouch.

What happened in 3k? Were the constants in xmlrpclib renamed/removed?

Cheers,
T
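Martin's point about Python 3 keywords is easy to verify today -- the rebindings that were legal in 2.6 are compile-time errors in 3.x (a quick sketch):

```python
def compiles(src):
    """Return True if src compiles, False on SyntaxError."""
    try:
        compile(src, "<test>", "exec")
        return True
    except SyntaxError:
        return False

# In Python 3 these names are true keywords: assigning to any of them
# is a compile-time SyntaxError, not a runtime rebinding as in <= 2.6.
keyword_assignments = ["True = 'blah'", "False = 6", "None = 1"]
assert not any(compiles(src) for src in keyword_assignments)

# Ordinary names still compile fine, of course.
assert compiles("Truthy = 1")
```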
[Python-Dev] xmlrpclib.{True, False} (was Re: Assignment to None)
My work on the AST optimizer has led me down the path of attempting to replace things like Name("True") with Const(Py_True) nodes. This works fine most of the time, with the exception of the xmlrpclib module, where True and False are actually redefined:

    True, False = True, False

As I stated in an earlier email, the optimizer tries to replace the tuple of Name nodes on the LHS with Py_True and Py_False respectively, which has the net effect of removing xmlrpclib.{True, False}. Obviously undesirable.

The simplest options I can think of to remedy this:

1. A setattr hack: setattr(__import__(__name__), "True", True)
2. Remove all optimization of Name("True") and Name("False")
3. Skip AST optimization entirely for the LHS of Assign nodes (effectively removing any optimization of the "targets" tuple)

I'm leaning towards #3 at the moment as it seems like it's going to be the cleanest approach and makes a lot of sense -- at least on the surface. Can anybody think of problems with this approach?

Cheers,
T
Re: [Python-Dev] xmlrpclib.{True, False} (was Re: Assignment to None)
Option 4 just struck me: only optimize Name nodes if they have a Load ctx. This makes even more sense: in a Store context, we almost invariably want the name rather than the constant.

Cheers,
T

Thomas Lee wrote:
> My work on the AST optimizer has led me down the path of attempting to
> replace things like Name("True") with Const(Py_True) nodes. [...]
> 3. Skip AST optimization entirely for the LHS of Assign nodes
> (effectively removing any optimization of the "targets" tuple)
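Option 4 -- fold names only in a Load context -- can be sketched with the modern ast module (the `CONSTS` table and names below are hypothetical; the real optimizer worked on the C-level AST, not Python-level nodes):

```python
import ast

# Hypothetical table of names we believe are constants.
CONSTS = {"DEBUG": False}

class FoldNames(ast.NodeTransformer):
    """Replace known names with constants, but only in a Load context,
    so assignment targets (Store ctx) are left untouched."""
    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Load) and node.id in CONSTS:
            return ast.copy_location(ast.Constant(CONSTS[node.id]), node)
        return node

tree = ast.parse("x = DEBUG\nDEBUG = True")
tree = ast.fix_missing_locations(FoldNames().visit(tree))

# The Load use was folded into a constant...
assert isinstance(tree.body[0].value, ast.Constant)
# ...while the Store target survived, so the rebinding still works.
assert isinstance(tree.body[1].targets[0], ast.Name)
```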
Re: [Python-Dev] xmlrpclib.{True, False} (was Re: Assignment to None)
Benjamin Peterson wrote:
> On Sun, Jun 15, 2008 at 8:11 AM, Thomas Lee <[EMAIL PROTECTED]> wrote:
>> The simplest options I can think of to remedy this:
>> 1. A setattr hack: setattr(__import__(__name__), "True", True)
>> 2. Remove all optimization of Name("True") and Name("False")
>> 3. Skip AST optimization entirely for the LHS of Assignment nodes
>> (effectively removing any optimization of the "targets" tuple)
>
> You're working on optimization for the 2.6 branch, correct? In that
> case, why don't we take option 3 in 2.x and just reenable it in 3.x
> where it's completely forbidden to assign to True or False?

Sorry, that's correct. This is against 2.6 trunk. That's the idea -- in 3.x this will be a non-issue.

Cheers,
T
Re: [Python-Dev] xmlrpclib.{True, False} (was Re: Assignment to None)
Georg Brandl wrote:
> Remember that it must still be possible to write (in 2.6)
>
>     True = 0
>     assert not True
>
> Georg

Ah, of course. Looks like I should just avoid optimizations of Name("True") and Name("False") altogether. That's a shame!

Cheers,
T
Re: [Python-Dev] xmlrpclib.{True, False} (was Re: Assignment to None)
Georg Brandl wrote:
> We can of course decide to make assignment to True and False illegal
> in 2.7 :)
>
> Georg

Great to know that's an option. There's little-to-no chance of this making 2.6. I might just avoid trying to treat True/False as "real" constants until there's been a proper discussion about changing these semantics -- just to get test_xmlrpc passing.

Cheers,
T
Re: [Python-Dev] Python VM
Martin v. Löwis wrote:
> Jakob,
>
> This looks fairly correct. A few comments below.
>
>> Control Flow
>> The calling sequence is: main() (in python.c) -> Py_Main() (main.c) ->
>> PyRun_FooFlags() (pythonrun.c) -> run_bar() (pythonrun.c) ->
>> PyEval_EvalCode() (ceval.c) -> PyEval_EvalCodeEx() (ceval.c) ->
>> PyEval_EvalFrameEx() (ceval.c).
>
> What this misses is the compiler stuff, i.e. PyParser_ASTFromFoo and
> PyAST_Compile, which precedes the call to PyEval_ (at least when no
> byte code file is available).

Further, if I have my way with the AST optimization code, the symtable construction will be an explicit step in between these >:)

In any case, this is awesome work, Jakob. It'd be great for this stuff to be documented in such detail -- I sure wish I had something like this to go by when I first started hacking on the source -- but the details seem to change quite often. Still, seeing the detail distilled once in a while is sort of nice, and great for anybody looking to get their teeth into the code. Thanks for doing the hard yards. :)

Cheers,
T
[Python-Dev] lnotab and the AST optimizer
I'm making some good progress with the AST optimizer, and now the main thing standing in my way is lnotab.

Currently lnotab expects bytecode sequencing to be roughly in sync with the order of the source file, and a few things that the optimizer does (e.g. swapping the bodies of an if/else after removing negation, such that "if not X: A; else: B" becomes "if X: B; else: A") break this assumption. This will result in either an assertion failure or incorrect line numbers being reported.

It seems that lnotab is used in relatively few places in the source code at the moment, but if I'm going to make a change to how lnotab works I want to do so in a way that's going to allow me to move forward while keeping everybody happy.

I'm away for a few days so I probably won't be able to get back to anybody until either Sunday or Monday, but I'd appreciate it if anybody in the know can weigh in on this.

Cheers,
Tom
Re: [Python-Dev] lnotab and the AST optimizer
Antoine Pitrou wrote:
> Hi,

Hi. Thanks for getting back to me so quickly. I can even respond before I have to drag myself off to bed. :)

>> I'm making some good progress with the AST optimizer, and now the
>> main thing standing in my way is lnotab. Currently lnotab expects
>> bytecode sequencing to be roughly in sync with the order of the
>> source file, and a few things that the optimizer does (e.g. swapping
>> the bodies of an if/else after removing negation, such that
>> "if not X: A; else: B" becomes "if X: B; else: A") break this
>> assumption. This will result in either an assertion failure or
>> incorrect line numbers being reported.
>
> In http://bugs.python.org/issue2459 ("speedup for / while / if with
> better bytecode") I had the same problem and decided to change the
> lnotab format so that line number increments are signed bytes rather
> than unsigned. The proposed patch contains many other changes, but
> with a bit of perseverance you may be able to extract the
> lnotab-related modifications... ;) This modification will allow many
> more types of control flow transformations than the current scheme
> does.

Great, thanks! I'll check it out next week.

> By the way:
>
>> swapping the bodies of an if/else after removing negation such that
>> "if not X: A; else: B" becomes "if X: B; else: A"
>
> Is this really an optimization? "if" and "if not" should use the same
> number of opcodes (the former produces JUMP_IF_FALSE and the latter
> JUMP_IF_TRUE).

Not quite. :) Even if we were producing a JUMP_IF_FALSE, it'd still be nice to optimize away the UNARY_NOT in the former. In practice, both actually produce a JUMP_IF_TRUE due to an existing optimization in the peephole optimizer which does just that. I'm trying to emulate this at the AST level because I'm part of a secret, evil conspiracy to be rid of the peephole optimizer. Shh.
The relevant code in the peepholer, plus comment:

    /* Replace UNARY_NOT JUMP_IF_FALSE POP_TOP
       with JUMP_IF_TRUE POP_TOP */
    case UNARY_NOT:
        if (codestr[i+1] != JUMP_IF_FALSE ||
            codestr[i+4] != POP_TOP ||
            !ISBASICBLOCK(blocks,i,5))
            continue;
        tgt = GETJUMPTGT(codestr, (i+1));
        if (codestr[tgt] != POP_TOP)
            continue;
        j = GETARG(codestr, i+1) + 1;
        codestr[i] = JUMP_IF_TRUE;
        SETARG(codestr, i, j);
        codestr[i+3] = POP_TOP;
        codestr[i+4] = NOP;
        break;

A little hackage with the dis module seems to confirm this is the case. Of course, if you know of a situation where this optimization doesn't apply and we actually wind up with a JUMP_IF_FALSE for an if/else post-peephole, I'm all ears.

Thanks again!

Cheers,
T
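The AST-level transformation being discussed -- rewriting "if not X: A; else: B" as "if X: B; else: A" -- can be sketched against the modern Python-level ast module (the real work was on the C AST, so this is only an illustration of the idea):

```python
import ast

class SwapNotBranches(ast.NodeTransformer):
    """Rewrite 'if not X: A; else: B' as 'if X: B; else: A'.
    Only applies when an else-branch exists, so both suites can swap."""
    def visit_If(self, node):
        self.generic_visit(node)
        test = node.test
        if (isinstance(test, ast.UnaryOp) and isinstance(test.op, ast.Not)
                and node.orelse):
            node.test = test.operand                    # drop the 'not'
            node.body, node.orelse = node.orelse, node.body
        return node

src = "if not flag:\n    r = 'A'\nelse:\n    r = 'B'"
tree = ast.fix_missing_locations(SwapNotBranches().visit(ast.parse(src)))

# Behaviour is preserved: flag=True still selects the 'B' branch.
ns = {"flag": True}
exec(compile(tree, "<test>", "exec"), ns)
assert ns["r"] == "B"
```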
Re: [Python-Dev] lnotab and the AST optimizer
Antoine Pitrou wrote:
> Antoine Pitrou writes:
>> In http://bugs.python.org/issue2459 ("speedup for / while / if with
>> better bytecode") I had the same problem and decided to change the
>> lnotab format so that line number increments are signed bytes rather
>> than unsigned.
>
> By the way, the same change could be done for relative jump offsets in
> the bytecode (change them from unsigned shorts to signed shorts).
> Taken together, both modifications would release a lot of constraints
> on the ordering of code blocks.

By the way, you were right about JUMP_IF_TRUE/JUMP_IF_FALSE. It's far too late. Apologies. I'm still pretty sure this is the peepholer's doing, though, and if that's the case then I want to try and deal with it at the AST level. Which is what's being achieved with the AST optimization I originally proposed, right?

Cheers,
T
Re: [Python-Dev] lnotab and the AST optimizer
Antoine Pitrou wrote:
> Thomas Lee writes:
>> By the way, you were right about JUMP_IF_TRUE/JUMP_IF_FALSE. It's far
>> too late. Apologies. I'm still pretty sure this is the peepholer's
>> doing,
>
> Yes indeed.
>
>> Which is what's being achieved with the AST optimization I originally
>> proposed, right?
>
> Well, not exactly: your optimization eliminates the UNARY_NOT by
> swapping the if/else blocks, while the peepholer eliminates the
> UNARY_NOT by fusing it with the subsequent jump opcode. In this case
> it doesn't make much of a difference, but if there is only an "if"
> without an "else", the peepholer's optimization is still possible
> while yours is not.

Unless a pass is injected into the if body, which will generate no additional bytecode and still have the same net effect.

> (bottom line: the peepholer is not dead!)

We'll see ;) Thanks for all your help, I'm looking forward to getting my hands on that patch.

Cheers,
T
Re: [Python-Dev] when is path==NULL?
Ulrich Eckhardt wrote:
> Hi!
>
> I'm looking at trunk/Python/sysmodule.c, function PySys_SetArgv(). In
> that function, there is code like this:
>
>     PyObject* path = PySys_GetObject("path");
>     ...
>     if (path != NULL) {
>         ...
>     }
>
> My intuition says that if path==NULL, something is very wrong. At
> least I would expect to get 'None', but never NULL, except when out of
> memory. So, for the case that path==NULL, I would simply invoke
> Py_FatalError("no mem for sys.path"), similarly to the other call
> there. Sounds reasonable?
>
> Uli

Maybe it's just being safe? From Python/sysmodule.c:

    PyThreadState *tstate = PyThreadState_GET();
    PyObject *sd = tstate->interp->sysdict;
    if (sd == NULL)
        return NULL;
    return PyDict_GetItemString(sd, name);

So if tstate->interp->sysdict is NULL, we return NULL. That's probably a bit unlikely. However, PyDict_GetItemString attempts to allocate a new PyString from the given char* key. If that fails, PySys_GetObject will also return NULL -- just like most functions in the code base that hit an out-of-memory error:

    PyObject *
    PyDict_GetItemString(PyObject *v, const char *key)
    {
        PyObject *kv, *rv;
        kv = PyString_FromString(key);
        if (kv == NULL)
            return NULL;
        rv = PyDict_GetItem(v, kv);
        Py_DECREF(kv);
        return rv;
    }

Seems perfectly reasonable for it to return NULL in this situation.

Cheers,
T
Re: [Python-Dev] when is path==NULL?
Ulrich Eckhardt wrote:
> My intuition says that if path==NULL, something is very wrong. [...]
> So, for the case that path==NULL, I would simply invoke
> Py_FatalError("no mem for sys.path"), similarly to the other call
> there. Sounds reasonable?

I also meant to mention that there might be a reason why we want the out-of-memory error to bubble up to the caller should that happen while attempting to allocate the PyString in PyDict_GetItemString, rather than just bailing out with a generic Py_FatalError.

Cheers,
T
[Python-Dev] Move encoding_decl to the top of Grammar/Grammar?
Hi all,

Currently, Parser/parsetok.c has a dependency on graminit.h. This can cause headaches when rebuilding after adding new syntax to Grammar/Grammar, because parsetok.c is part of pgen, which is responsible for *generating* graminit.h. This circular dependency can result in parsetok.c using a different value for encoding_decl to what is used in ast.c, which causes PyAST_FromNode to fall over at runtime. It effectively looks something like this:

* Grammar/Grammar is modified
* build begins -- pgen compiles, parsetok.c uses encoding_decl=X
* graminit.h is rebuilt with encoding_decl=Y
* ast.c is compiled using encoding_decl=Y
* when python runs, parsetok() emits encoding_decl nodes that PyAST_FromNode can't recognize: SystemError: invalid node XXX for PyAST_FromNode

A nice, easy short-term solution that doesn't require unwinding this dependency would be to simply move encoding_decl to the top of Grammar/Grammar and add a big warning noting that it needs to come before everything else. This will help to ensure its value never changes when syntax is added/removed.

I'm happy to provide a patch for this (including some additional dependency info for files dependent upon graminit.h and Python-ast.h), but was wondering if there were any opinions about how this should be resolved.

Cheers,
Tom
Re: [Python-Dev] Move encoding_decl to the top of Grammar/Grammar?
Here's the corresponding tracker issue: http://bugs.python.org/issue4347

I've uploaded a patch there anyway, since I'm going to need this stuff working for a presentation I'm giving tomorrow.

Cheers,
T
[Python-Dev] New Future Keywords
Hi,

Just a quick question: how can I add new future keywords to Python? I need to add a new (Python) keyword to the language, but there seem to be a few different source files that I need to modify. Just want to make sure I'm doing it the right way before I go unleashing a nasty broken patch into the world :)

Cheers,
Tom
Re: [Python-Dev] Switch statement
On Sat, Jun 10, 2006 at 05:53:14PM -0500, [EMAIL PROTECTED] wrote:
> * Aside from the modified Grammar file there is no documentation.
> * There are no test cases.
> * Can you submit a patch on SourceForge?

All have been addressed, although I'm not sure if I've covered everywhere I need to update for the documentation:

http://sourceforge.net/tracker/index.php?func=detail&aid=1504199&group_id=5470&atid=305470

Thanks again for your feedback!

Cheers,
Tom
--
Tom Lee
http://www.vector-seven.com
Re: [Python-Dev] Switch statement
On Mon, Jun 12, 2006 at 11:33:49PM +0200, Michael Walter wrote:
> Maybe "switch" became a keyword with the patch..
>
> Regards,
> Michael

That's correct.

> On 6/12/06, M.-A. Lemburg <[EMAIL PROTECTED]> wrote:
>> Could you upload your patch to SourceForge? Then I could add it to
>> the PEP.

It's already up there :) I thought I sent that through in another e-mail, but maybe not:

http://sourceforge.net/tracker/index.php?func=detail&aid=1504199&group_id=5470&atid=305470

Complete with documentation changes and a unit test.

>> Thomas wrote a patch which implemented the switch statement using an
>> opcode. The reason was probably that switch works a lot like e.g. the
>> for-loop which also opens a new block.

No, Skip explained this in an earlier e-mail: apparently some programming languages use a compile-time generated lookup table for switch statements rather than a COMPARE_OP for each case. The restriction is, of course, that you're stuck with constants for each case statement. In a programming language like Python, where there are no named constants, the usefulness of such a construct might be questioned. Again, see Skip's earlier e-mails.

>> Could you explain how your patch works?

1. Evaluate the "switch" expression so that it's at the top of the stack
2. For each case clause:
   2.1. Generate a DUP_TOP to duplicate the switch value for a comparison
   2.2. Evaluate the "case" expression
   2.3. COMPARE_OP(PyCmp_EQ)
   2.4. Jump to the next case statement if false
   2.5. Otherwise, POP_TOP and execute the suite for the case clause
   2.6. Then jump to 3
3. POP_TOP to remove the evaluated switch expression from the stack

As you can see from the above, my patch generates a COMPARE_OP for each case, so you can use expressions - not just constants - for cases. All of this is in the code found in Python/compile.c.
Cheers,
Tom
--
Tom Lee
http://www.vector-seven.com
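The evaluation sequence above -- subject evaluated once, then compared (==) against each case expression in order, with an optional else -- can be emulated in plain Python (a sketch only; `run_switch` and the lambdas are hypothetical stand-ins for the compiled bytecode):

```python
def run_switch(value, cases, default=None):
    """Emulate the described compilation: the switch subject is
    evaluated once, then compared in order against each case
    expression; the first match wins, else the default runs."""
    for guard, action in cases:
        if value == guard():   # case expressions may be arbitrary code
            return action()
    return default() if default is not None else None

result = run_switch(
    2,
    [(lambda: 1, lambda: "one"),
     (lambda: 1 + 1, lambda: "two")],   # non-constant case expression
    default=lambda: "other",
)
assert result == "two"
```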
Re: [Python-Dev] Switch statement
On Sat, Jun 10, 2006 at 05:53:14PM -0500, [EMAIL PROTECTED] wrote:
> Thomas> As the subject of this e-mail says, the attached patch adds a
> Thomas> "switch" statement to the Python language.
>
> Thanks for the contribution. I patched my sandbox and it built just
> fine. I'm going out of town for a couple weeks, so I'll point out what
> everyone else is thinking then duck out of the way:
>
> * Aside from the modified Grammar file there is no documentation.
> * There are no test cases.
> * Can you submit a patch on SourceForge?

You're right, of course. I'll sort the documentation and test cases out as soon as I get a chance.

> You mentioned:
>
> Thomas> I got a bit lost as to why the SWITCH opcode is necessary for
> Thomas> the implementation of the PEP. The reasoning seems to be
> Thomas> improving performance, but I'm not sure how a new opcode could
> Thomas> improve performance.
>
> Your implementation is straightforward, but uses a series of DUP_TOP
> and COMPARE_OP instructions to compare each alternative expression to
> the initial expression. In many other languages the expression
> associated with the case would be restricted to be a constant
> expression so that at compile time a jump table or dictionary lookup
> could be used to jump straight to the desired case.

I see. But restricting the switch to constants in the name of performance may not make sense in a language like Python. Maybe this is something for the PEP to discuss, but it seems such an implementation would be confusing, and sometimes it may not be possible to use a switch case in place of if/elif/else statements at all.
Consider the following:

    #!/usr/bin/python

    FAUX_CONST_A = 'a'
    FAUX_CONST_B = 'b'

    some_value = 'a'
    switch some_value:
        case FAUX_CONST_A:
            print 'got a'
        case FAUX_CONST_B:
            print 'got b'
        else:
            print ':('
    # EOF

Although, conceptually, FAUX_CONST_A and FAUX_CONST_B are constants, a 'constants only' implementation would likely give a syntax error (see expr_constant in Python/compile.c). IMHO, this will lead to one of two things:

a) unnecessary duplication of constant values for the purpose of using them as case values
b) reverting back to if/elif/else

I do get the distinction; I'm just wondering if the usefulness of the semantics (or lack thereof) is going to negate any potential performance enhancements: if a switch statement is never used because it's only useful in a narrow set of circumstances, then maybe we're looking to improve performance in the wrong place?

Just thinking about it, maybe there could be two different code paths for switch statements: one when all the case values are constants (the 'fast' one) and one where one or more are expressions. This would mean a slightly longer compile time for switch statements while ensuring that runtime execution is the maximum possible without placing any major restrictions on what can be used as a case value.

Cheers,
Tom
--
Tom Lee
http://www.vector-seven.com
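The "two code paths" idea hinges on a compile-time test for whether every case value is a literal constant. A rough sketch of that test with the modern ast module (the switch grammar itself was never merged, so plain expressions stand in for case values here):

```python
import ast

def is_constant_expr(src):
    """True when the expression is a literal the compiler could hash
    into a jump table at compile time; names bound to faux-constants
    do not qualify."""
    node = ast.parse(src, mode="eval").body
    return isinstance(node, ast.Constant)

# Literals qualify for the hypothetical 'fast' path...
assert is_constant_expr("'a'")
assert is_constant_expr("42")
# ...but a name bound to a faux-constant does not, forcing the
# COMPARE_OP-per-case path.
assert not is_constant_expr("FAUX_CONST_A")
```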
[Python-Dev] zipfile.ZipFile('foo.zip', 'a'): file not found -> create?
Hi all, In reference to: http://sourceforge.net/tracker/index.php?func=detail&aid=1514451&group_id=5470&atid=105470 I wrote a patch for this "bug", but a valid point was raised by Ronald Oussorren: this borders on being more of a "feature" than a bug fix, although - IMHO - this fix improves consistency with the rest of the Python standard library. Can I get some opinions on this? My patch for this issue currently lives here: http://sourceforge.net/tracker/index.php?func=detail&aid=1517891&group_id=5470&atid=305470 Cheers, Tom ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
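For context, the behaviour the patch argues for, shown with the zipfile module as it works in current Python releases (where opening a nonexistent archive in 'a' mode creates it instead of raising "file not found"; the file names below are just for the example):

```python
import os
import shutil
import tempfile
import zipfile

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, 'foo.zip')
assert not os.path.exists(path)

# Append mode on a missing archive: rather than failing, a new,
# empty archive is created and then appended to.
with zipfile.ZipFile(path, 'a') as zf:
    zf.writestr('hello.txt', 'hello')

with zipfile.ZipFile(path, 'r') as zf:
    names = zf.namelist()

shutil.rmtree(tmpdir)
assert names == ['hello.txt']
```

This matches the consistency argument above: `open(path, 'a')` on a missing file also creates it rather than raising.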
[Python-Dev] Implementation of PEP 341
Hi all, I've been using Python for a few years and, as of a few days ago, finally decided to put the effort into contributing code back to the project. I'm attempting to implement PEP 341 (unification of try/except and try/finally) against HEAD. However, this being my first attempt at a change to the syntax, there's been a bit of a learning curve. I've modified Grammar/Grammar to use the new try_stmt grammar, updated Parser/Python.asdl to accept a stmt* finalbody for TryExcept instances and modified Python/ast.c to handle the changes to Python.asdl - generating an AST for the finalbody. All that remains as far as I can see is to modify Python/compile.c to generate the necessary code and update Modules/parsermodule.c to accommodate the changes to the grammar. (If anybody has further input as to what needs to be done here, I'm all ears!) The difficulty I'm having is in Python/compile.c: currently there are two functions which generate the code for the two existing try_stmt paths. compiler_try_finally doesn't need any changes as far as I can see. compiler_try_except, however, now needs to generate code to handle TryExcept.finalbody (which I added to Parser/Python.asdl). This sounds easy enough, but the following is causing me difficulty:

    /* BEGIN */
    ADDOP_JREL(c, SETUP_EXCEPT, except);
    compiler_use_next_block(c, body);
    if (!compiler_push_fblock(c, EXCEPT, body))
        return 0;
    VISIT_SEQ(c, stmt, s->v.TryExcept.body);
    ADDOP(c, POP_BLOCK);
    compiler_pop_fblock(c, EXCEPT, body);
    /* END */

A couple of things confuse me here: 1. What's the purpose of the push_fblock/pop_fblock calls? 2. Do I need to add "ADDOP_JREL(c, SETUP_FINALLY, end);" before/after SETUP_EXCEPT? Or will this conflict with the SETUP_EXCEPT op? I don't know enough about the internals of SETUP_EXCEPT/SETUP_FINALLY to know what to do here.
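For readers not familiar with PEP 341: the semantics being implemented are that try/except/finally is exactly equivalent to a try/except nested inside a try/finally. A quick check of that intended equivalence in plain Python, independent of the compiler work above:

```python
def unified(fail):
    # The single statement PEP 341 introduces.
    events = []
    try:
        events.append('body')
        if fail:
            raise ValueError
    except ValueError:
        events.append('except')
    finally:
        events.append('finally')
    return events

def nested(fail):
    # What PEP 341 says the unified form desugars to.
    events = []
    try:
        try:
            events.append('body')
            if fail:
                raise ValueError
        except ValueError:
            events.append('except')
    finally:
        events.append('finally')
    return events

assert unified(True) == nested(True) == ['body', 'except', 'finally']
assert unified(False) == nested(False) == ['body', 'finally']
```

Any compiler implementation of the PEP must preserve exactly this ordering of body, handler and finalizer execution.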
Also, in compiler_try_finally we see this code:

    /* BEGIN */
    ADDOP_JREL(c, SETUP_FINALLY, end);
    compiler_use_next_block(c, body);
    if (!compiler_push_fblock(c, FINALLY_TRY, body))
        return 0;
    VISIT_SEQ(c, stmt, s->v.TryFinally.body);
    ADDOP(c, POP_BLOCK);
    compiler_pop_fblock(c, FINALLY_TRY, body);
    ADDOP_O(c, LOAD_CONST, Py_None, consts);
    /* END */

Why the LOAD_CONST Py_None? Does this serve any purpose? Some sort of weird pseudo return value? Or does it have a semantic purpose that I'll have to reproduce in compiler_try_except? Cheers, and thanks for any help you can provide :) Tom ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
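For the archives: the None here is not a pseudo return value. In the CPython of this era, the END_FINALLY opcode that terminates a finally block pops a status value telling it how the protected block exited; pushing None means "normal exit, nothing to re-raise". So it has a semantic purpose that a unified try/except/finally must preserve. One way to poke at the generated code from Python (note the exact opcodes vary by CPython version, and newer interpreters removed END_FINALLY entirely, so no specific opcode is asserted here):

```python
import dis
import io

def f():
    try:
        pass
    finally:
        pass

# Capture the disassembly of a try/finally to inspect the compiler's
# output for the finally-block protocol discussed above.
buf = io.StringIO()
dis.dis(f, file=buf)
listing = buf.getvalue()
assert listing.strip()  # a bytecode listing was produced
```

On a 2.x-era interpreter the listing shows LOAD_CONST None immediately before the finally body, feeding END_FINALLY its "no exception" marker.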
[Python-Dev] PEP 341 patch & memory management (was: Memory management in the AST parser & compiler)
Interesting trick! The PEP 341 patch is now using Marek's 'do ... while' resource cleanup trick instead of the nasty goto voodoo. I've also fixed the last remaining bug that Neal pointed out. I'm running the unit tests right now, shall have the updated (and hopefully final) PEP 341 patch up on sourceforge within the next 15 minutes. If anybody has feedback/suggestions for the patch, please let me know. I'm new to this stuff, so I'm still finding my way around :) Cheers, Tom Nick Coghlan wrote: >Marek Baczek Baczyński wrote: > > >>2005/11/15, Nick Coghlan <[EMAIL PROTECTED]>: >> >> >>>It avoids the potential for labelling problems that arises when goto's are >>>used for resource cleanup. It's a far cry from real exception handling, but >>>it's the best solution I've seen within the limits of C. >>> >>> >> >>do { >> >> >>} while (0); >> >> >>Same benefit and saves some typing :) >> >> > >Heh. Good point. I spend so much time working with a certain language I tend >to forget do/while loops exist ;) > >Cheers, >Nick. > > > ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Memory management in the AST parser & compiler
As the writer of the crappy code that sparked this conversation, I feel I should say something :) Brett Cannon wrote: >On 11/15/05, Neal Norwitz <[EMAIL PROTECTED]> wrote: > > >>On 11/15/05, Jeremy Hylton <[EMAIL PROTECTED]> wrote: >> >> >>>Thanks for the message. I was going to suggest the same thing. I >>>think it's primarily a question of how to add an arena layer. The AST >>>phase has a mixture of malloc/free and Python object allocation. It >>>should be straightforward to change the malloc/free code to use an >>>arena API. We'd probably need a separate mechanism to associate a set >>>of PyObject* with the arena and have those DECREFed. >>> >>> >>Well good. It seems we all agree there is a problem and on the >>general solution. I haven't thought about Brett's idea to see if it >>could work or not. It would be great if we had someone start working >>to improve the situation. It could well be that we live with the >>current code for 2.5, but it would be great to use arenas for 2.6 at >>least. >> >> >> > > I have been thinking about this some more to put off doing homework >and I have some random ideas I just wanted to toss out there to make >sure I am not thinking about arena memory management incorrectly >(never actually encountered it directly before). > >I think an arena API is going to be the best solution. Pulling >trickery with redefining Py_INCREF and such like I suggested seems >like a pain and possibly error-prone. With the compiler being a >specific corner of the core having a special API for handling the >memory for PyObject* stuff seems reasonable. > > > I agree. And it raises the learning curve for poor saps like myself. :) >We might need PyArena_Malloc() and PyArena_New() to handle malloc() >and PyObject* creation. 
We could then have a struct that just stored >pointers to the allocated memory (linked list for each pointer which >gives high memory overhead or linked list of arrays that should lower >memory but make having possible holes in the array for stuff already >freed a pain to handle). We would then have PyArena_FreeAll() that >would be strategically placed in the code for when bad things happen >that would just traverse the lists and free everything. I assume >having a way to free individual items might be useful. Could have the >PyArena_New() and _Malloc() return structs with the needed info for a >PyArena_Free(location_struct) to be able to free the specific item >without triggering a complete freeing of all memory. But this usage >should be discouraged and only used when proper memory management is >guaranteed. > > > An arena/pool (as I understood it from my quick skim) for the AST would probably best be implemented (IMHO) as an ADT based on a linked list:

    typedef struct _ast_pool_node {
        struct _ast_pool_node *next;
        PyObject *object;  /* == NULL when data != NULL */
        void *data;        /* == NULL when object != NULL */
    } ast_pool_node;

deallocating a node could then be as simple as:

    /* ast_pool_node *n */
    PyObject_Free(n->object);
    if (n->data != NULL)
        free(n->data);
    /* save n->next */
    free(n);
    /* then go on to free n->next */

I haven't really thought all that deeply about this, so somebody shoot me down if I'm completely off-base (Neal? :D). Every allocation of a seq/stmt within ast.c would have its memory saved to the pool within the function it's allocated in. Then before we return, we can just deallocate the pool/arena/whatever you want to call it. The problem with this is that should we get to the end of the function and everything actually went okay (i.e. we return non-NULL), we then have to run through and deallocate all the nodes anyway (without deallocating n->object or n->data). Bah. Maybe we *would* be better off with a monolithic cleanup. I don't know.
>Boy am I wanting RAII from C++ for automatic freeing when scope is >left. Maybe we need to come up with a similar thing, like all memory >that should be freed once a scope is left must use some special struct >that stores references to all created memory locally and then a free >call must be made at all exit points in the function using the special >struct. Otherwise the pointer is stored in the arena and handled >en-mass later. > > > Which is basically what I just rambled on about up above, I think :) >Hopefully this is all made some sense. =) Is this the basic strategy >that an arena setup would need? if not can someone enlighten me? > > >-Brett >___ >Python-Dev mailing list >Python-Dev@python.org >http://mail.python.org/mailman/listinfo/python-dev >Unsubscribe: >http://mail.python.org/mailman/options/python-dev/krumms%40gmail.com > > > ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
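The "free everything when the scope is left" behaviour Brett is wishing for from RAII has a rough conceptual analogue in modern Python's contextlib.ExitStack: register a cleanup per allocation, and one exit point runs them all (LIFO), whether the function succeeded or failed. This is a sketch of the pattern under discussion only, not of any proposed C API; all names here are invented for the example:

```python
from contextlib import ExitStack

log = []

def allocate(name):
    # Stand-in for an arena allocation; the returned callable is
    # the corresponding "free".
    log.append(('alloc', name))
    return lambda: log.append(('free', name))

def build(fail=False):
    with ExitStack() as arena:
        for name in ('body', 'handlers', 'orelse'):
            arena.callback(allocate(name))
        if fail:
            raise RuntimeError('parse error')
        # Success or failure, every registered cleanup runs when
        # the with-block exits.

try:
    build(fail=True)
except RuntimeError:
    pass

# All three "allocations" were freed despite the error, in LIFO order.
assert [n for op, n in log if op == 'free'] == ['orelse', 'handlers', 'body']
```

The single `with` block plays the role of the "one safe call before every return" that the thread keeps circling around.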
Re: [Python-Dev] Memory management in the AST parser & compiler
Niko Matsakis wrote: >>Boy am I wanting RAII from C++ for automatic freeing when scope is >>left. Maybe we need to come up with a similar thing, like all memory >>that should be freed once a scope is left must use some special struct >>that stores references to all created memory locally and then a free >>call must be made at all exit points in the function using the special >>struct. Otherwise the pointer is stored in the arena and handled >>en-mass later. >> >> > >That made sense. I think I'd be opposed to what you describe here >just because I think anything which *requires* that cleanup code be >placed on every function is error prone. > > > Placing it in every function isn't really the problem: at the moment it's more the fact we have to keep track of too many variables at any given time to properly deallocate it all. Cleanup code gets tricky very fast. Then it gets further complicated by the fact that stmt_ty/expr_ty/mod_ty/etc. deallocate members (usually asdl_seq instances in my experience) - so if a construction takes place, all of a sudden you have to make sure you don't deallocate those members a second time in the cleanup code :S it gets tricky very quickly. Even if it meant we had just one function call - one, safe function call that deallocated all the memory allocated within a function - that we had to put before each and every return, that's better than what we have. Is it the best solution? Maybe not. But that's what we're looking for here I guess :) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Memory management in the AST parser & compiler
By the way, I liked the sound of the arena/pool tree - really good idea. Thomas Lee wrote: >Niko Matsakis wrote: > > > >>>Boy am I wanting RAII from C++ for automatic freeing when scope is >>>left. Maybe we need to come up with a similar thing, like all memory >>>that should be freed once a scope is left must use some special struct >>>that stores references to all created memory locally and then a free >>>call must be made at all exit points in the function using the special >>>struct. Otherwise the pointer is stored in the arena and handled >>>en-mass later. >>> >>> >>> >>> >>That made sense. I think I'd be opposed to what you describe here >>just because I think anything which *requires* that cleanup code be >>placed on every function is error prone. >> >> >> >> >> >Placing it in every function isn't really the problem: at the moment >it's more the fact we have to keep track of too many variables at any >given time to properly deallocate it all. Cleanup code gets tricky very >fast. > >Then it gets further complicated by the fact that >stmt_ty/expr_ty/mod_ty/etc. deallocate members (usually asdl_seq >instances in my experience) - so if a construction takes place, all of a >sudden you have to make sure you don't deallocate those members a second >time in the cleanup code :S it gets tricky very quickly. > >Even if it meant we had just one function call - one, safe function call >that deallocated all the memory allocated within a function - that we >had to put before each and every return, that's better than what we >have. Is it the best solution? Maybe not. 
But that's what we're looking >for here I guess :) > >___ >Python-Dev mailing list >Python-Dev@python.org >http://mail.python.org/mailman/listinfo/python-dev >Unsubscribe: >http://mail.python.org/mailman/options/python-dev/krumms%40gmail.com > > > ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Memory management in the AST parser & compiler
Just messing around with some ideas. I was trying to avoid the ugly macros (note my earlier whinge about a learning curve) but they're the cleanest way I could think of to get around the problem without resorting to a mass deallocation right at the end of the AST run. Which may not be all that bad given we're going to keep everything in memory anyway until an error occurs ... anyway, anyway, I'm getting sidetracked :) The idea is to ensure that all allocations within a single function are made using the pool, so that a function finishes what it starts. This way, if the function fails, it alone is responsible for cleaning up its own pool and that's all. No funkiness needed for sequences, because each member of the sequence belongs to the pool too. Note that the stmt_ty instances are also allocated using the pool. This breaks interfaces all over the place though. Not exactly a pretty change :) But yeah, maybe somebody smarter than I will come up with something a bit cleaner.

--

    /* snip! */

    #define AST_SUCCESS(pool, result) return result
    /* do { ... } while (0) so the macro expands to a single statement
       and stays safe inside unbraced ifs */
    #define AST_FAILURE(pool, result) \
        do { asdl_pool_free(pool); return result; } while (0)

    static stmt_ty
    ast_for_try_stmt(struct compiling *c, const node *n)
    {
        /* with the pool stuff, we wouldn't need to declare _all_ the
           variables here either. I'm just lazy. */
        asdl_pool *pool;
        int i;
        const int nch = NCH(n);
        int n_except = (nch - 3) / 3;
        stmt_ty result_st = NULL, except_st = NULL;
        asdl_seq *body = NULL, *orelse = NULL, *finally = NULL;
        asdl_seq *inner = NULL, *handlers = NULL;

        REQ(n, try_stmt);

        /* c->pool is the parent of pool. when pool is freed (via
           AST_FAILURE), it is also removed from c->pool's list of
           children */
        pool = asdl_pool_new(c->pool);
        if (pool == NULL)
            AST_FAILURE(pool, NULL);

        body = ast_for_suite(c, CHILD(n, 2));
        if (body == NULL)
            AST_FAILURE(pool, NULL);

        if (TYPE(CHILD(n, nch - 3)) == NAME) {
            if (strcmp(STR(CHILD(n, nch - 3)), "finally") == 0) {
                if (nch >= 9 && TYPE(CHILD(n, nch - 6)) == NAME) {
                    /* we can assume it's an "else", because nch >= 9 for
                       try-else-finally and it would otherwise have a type
                       of except_clause */
                    orelse = ast_for_suite(c, CHILD(n, nch - 4));
                    if (orelse == NULL)
                        AST_FAILURE(pool, NULL);
                    n_except--;
                }
                finally = ast_for_suite(c, CHILD(n, nch - 1));
                if (finally == NULL)
                    AST_FAILURE(pool, NULL);
                n_except--;
            }
            else {
                /* we can assume it's an "else", otherwise it would have
                   a type of except_clause */
                orelse = ast_for_suite(c, CHILD(n, nch - 1));
                if (orelse == NULL)
                    AST_FAILURE(pool, NULL);
                n_except--;
            }
        }
        else if (TYPE(CHILD(n, nch - 3)) != except_clause) {
            ast_error(n, "malformed 'try' statement");
            AST_FAILURE(pool, NULL);
        }

        if (n_except > 0) {
            /* process except statements to create a try ... except */
            handlers = asdl_seq_new(pool, n_except);
            if (handlers == NULL)
                AST_FAILURE(pool, NULL);
            for (i = 0; i < n_except; i++) {
                excepthandler_ty e = ast_for_except_clause(c,
                                         CHILD(n, 3 + i * 3),
                                         CHILD(n, 5 + i * 3));
                if (!e)
                    AST_FAILURE(pool, NULL);
                asdl_seq_SET(handlers, i, e);
            }
            except_st = TryExcept(pool, body, handlers, orelse, LINENO(n));
            if (except_st == NULL)
                AST_FAILURE(pool, NULL);

            /* if a 'finally' is present too, we nest the TryExcept within
               a TryFinally to emulate try ... except ... finally */
            if (finally != NULL) {
                inner = asdl_seq_new(pool, 1);
                if (inner == NULL)
                    AST_FAILURE(pool, NULL);
                asdl_seq_SET(inner, 0, except_st);
                result_st = TryFinally(pool, inner, finally, LINENO(n));
                if (result_st == NULL)
                    AST_FAILURE(pool, NULL);
            }
            else
                result_st = except_st;
        }
        else {
            /* no exceptions: must be a try ... finally */
            assert(orelse == NULL);
            assert(finally != NULL);
            result_st = TryFinally(pool, body, finally, LINENO(n));
            if (result_st == NULL)
                AST_FAILURE(pool, NULL);
        }

        /* pool deallocated when c->pool is deallocated */
        return AST_SUCCESS(pool, result_st);
    }

Nick Coghlan wrote: >Thomas Lee wrote: > > >>As the writer of the crappy code that sparked this conversation, I feel >>I should say something :) >> >> > >Don't feel bad about it.
Re: [Python-Dev] Memory management in the AST parser & compiler
Portability may also be an issue to take into consideration: http://www.eskimo.com/~scs/C-faq/q7.32.html http://archives.neohapsis.com/archives/postfix/2001-05/1305.html Cheers, Tom Alex Martelli wrote: >On Nov 17, 2005, at 12:46 PM, Brett Cannon wrote: >... > > >>>alloca? >>> >>>(duck) >>> >>> >>> >>But how widespread is its support (e.g., does Windows have it)? >> >> > >Yep, spelled with a leading underscore: >http://msdn.microsoft.com/library/default.asp?url=/library/en-us/ >vclib/html/_crt__alloca.asp > > >Alex > >___ >Python-Dev mailing list >Python-Dev@python.org >http://mail.python.org/mailman/listinfo/python-dev >Unsubscribe: >http://mail.python.org/mailman/options/python-dev/krumms%40gmail.com > > > ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Memory management in the AST parser & compiler
Neil Schemenauer wrote: >Fredrik Lundh <[EMAIL PROTECTED]> wrote: > > >>Thomas Lee wrote: >> >> >> >>>Even if it meant we had just one function call - one, safe function call >>>that deallocated all the memory allocated within a function - that we >>>had to put before each and every return, that's better than what we >>>have. >>> >>> >>alloca? >> >> > >Perhaps we should use the memory management technique that the rest >of Python uses: reference counting. I don't see why the AST >structures couldn't be PyObjects. > > Neil > > > I'm +1 for reference counting. It's going to be a little error prone initially (certainly much less error prone than the current system in the long run), but the pooling/arena idea is going to screw with all sorts of stuff within the AST and possibly in bits of Python/compile.c too. At least, all my attempts wound up looking that way :) Cheers, Tom >___ >Python-Dev mailing list >Python-Dev@python.org >http://mail.python.org/mailman/listinfo/python-dev >Unsubscribe: >http://mail.python.org/mailman/options/python-dev/krumms%40gmail.com > > > ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Memory management in the AST parser & compiler
Nick Coghlan wrote: >Greg Ewing wrote: > > >>Neal Norwitz wrote: >> >> >> >>>I'm mostly convinced that using PyObjects would be a good thing. >>>However, making the change isn't free as all the types need to be >>>created and this is likely quite a bit of code. >>> >>> >>Since they're all so similar, perhaps they could be >>auto-generated by a fairly simple script? >> >>(I'm being very careful not to suggest using Pyrex >>for this, as I can appreciate the desire not to make >>such a fundamental part of the core dependent on it!) >> >> > >The ast C structs are already auto-generated by a Python script (asdl_c.py, to >be precise). The trick is to make that script generate full PyObjects rather >than the simple C structures that it generates now. > > > I was actually trying this approach last night. I'm back to it this evening, working with the ast-objects branch. I'll push a patch tonight with whatever I get done. Quick semi-related question: where are the marshal_* functions called? They're all static in Python-ast.c and don't seem to be actually called anywhere. Can we ditch them? >The second step is to then modify ast.c to use the new structures. A branch >probably wouldn't help much with initial development (this is a "break the >world, check in when stuff compiles again" kind of change, which is hard to >split amongst multiple people), but I think it would be of benefit when >reviewing the change before moving it back to the trunk. > > > Based on my (limited) experience and your approach, compile.c may also need to be modified a little too (this should be pretty trivial). Cheers, Tom ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
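A closing note for readers of this archive: the C structs generated by asdl_c.py are mirrored at the Python level by the ast module, and the separate TryExcept/TryFinally nodes discussed throughout this thread were eventually merged into a single Try node carrying all four sequences (body, handlers, orelse, finalbody) in Python 3.3. A quick look with a current interpreter:

```python
import ast

tree = ast.parse(
    "try:\n"
    "    pass\n"
    "except ValueError:\n"
    "    pass\n"
    "finally:\n"
    "    pass\n"
)
stmt = tree.body[0]
# One unified node holds the same four sequences the compiler work
# above juggles in C.
assert type(stmt).__name__ == 'Try'
assert len(stmt.handlers) == 1
assert len(stmt.finalbody) == 1
```

So the stmt* finalbody field added to Python.asdl in this thread survives, just on a differently named node.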