Re: Anonymous functions revisited : tuple actions
On 24 Mar 2005 01:58:48 -0800, "Kay Schluehr" <[EMAIL PROTECTED]> wrote:

>I personally don't like using exec and eval for stuff different from
>evaluating user input.

I lean the other way. I never want to use user input for eval and
exec. Way too risky. But limited use, that is not user input, should
be OK. The programmer can do anything he/she wants anyway, so exec or
eval is just another tool in that respect. But I am trying to find a
way to limit the code objects. To make it safe I need to hide all the
name spaces except the locals within the function. It's not easy, but
I know it's possible.

>You rely much on "evaluate statement on the line" by adapting
>conventional Python syntax. I think one can go a bit further breaking
>the syntactical prejudices and apply tuple-actions :)
>
>Playing a bit with tuple-actions shows that the concept is quite
>powerful and can be used to create simple statements.
>
>First of all the semantics has to be patched:
>
>We have
>
>    (x,y,z=0) -> (x,y,z)
>
>as a tuple assignment
>
>    ((x,y,z=0)->(x,y,z))(a,b,c) = (x=a,y=b,z=c)
>
>But it is not clear what
>
>    (x,y,z=0) -> x*y-z
>
>actually means?

I think I'm following you, but I'm not sure how much of it is
diagrammatic or meant to be actual code. Are you suggesting using ->
as an operator? If so, then this is backwards from the = operator.

    x*y-z <- (x,y,z=0)

would be more consistent with current syntax. Then the <- would have
the meaning of translate instead of assign. Or possibly an operator
for adapt?

Guido wants to use -> in function definitions to specify return value
types, although I see problems with that too.

>Proposal:
>
>    (x,y=0) -> x*y   => ((x,y=0)->x*y)(a,b) -> (x=a,y=b),a*b
>    (x,y=0) -> (x*y) => ((x,y=0)->(x*y))(a,b) -> (x=a*b,y=b)
>
>So (x,y=0) -> x*y is appending the result to the argument tuple.
>
>Remark: this is isomorphic to
>
>    (x,y=0,res=None) -> ((x,y),x*y)
>
>but it becomes harder now to identify
>
>    (x,y,res=None) -> ((x,y),x*y)
>
>with
>
>    x*y
>
>Provide a compiler-hint:
>
>    (x,y,()) -> x*y

I'm not following you completely here. It appears you're trying to
create a system to map different arguments to equations in an
indirect way.

>Now we are ready for a few examples:
>
>default value:
>    (i) -> (0)              # i = 0
>
>inplace increment:
>    (i) -> i+1              # i = i+1
>
>conditional expression:
>    (i) -> i<3              # i,res = i,i<3

Lost me again, what is res?

>simple transformation:
>    (res) -> (res+i**2)     # res = res+i**2
>
>Define a While loop as a function:
>
>def While(par, cond, change, action):
>    par(None)   # create default
>    res = 0
>    while cond(par)[1]:
>        action(res)
>        change(par)
>    return res
>
>Let's apply it to some tuple actions:
>
>While((i)->(0), (i)->i<3, (i)->(i+1), (res)->(res+i**2))
>
>and evaluate While stepwise:
>
>1. par(None)     <=> (i)->(0)(None)         # (i) = (0)
>2. cond(par)[1]  <=> (i)->i<3(0)            # (i,c) = (0,True)
>3. action(res)   <=> (res)->(res+i**2)(0)   # (res) = (0)
>4. change(par)   <=> (i)->(i+1)(0)          # (i) = (1)
>5. cond(par)[1]  <=> (i)->i<3(1)            # (i,c) = (1,True)
>6. action(res)   <=> (res)->(res+i**2)(0)   # (res) = (1)
>7. change(par)   <=> (i)->(i+1)(1)          # (i) = (2)
>5. cond(par)[1]  <=> (i)->i<3(2)            # (i,c) = (2,True)
>6. action(res)   <=> (res)->(res+i**2)(1)   # (res) = (5)
>7. change(par)   <=> (i)->(i+1)(2)          # (i) = (3)
>5. cond(par)[1]  <=> (i)->i<3(3)            # (i,c) = (3,False)
>break
>
>=> res = 5
>
>If we customize the other control flow primitives For and If it should
>be possible to create a little language only by using these primitives.
>
>It is obvious by definition of our While that we can replace arguments
>on the fly:
>
>conds = [(i)->i<3, (i)->i+2<7, (i)->i>=0]
>
>[While((i)->(0), cond, (i)->(i+1), (res)->(res+i**2)) for cond in conds]
>
>=> [5,29,0]
>
>Wouldn't it be fun to use in Python?
>
>Only drawback: does not look like executable pseudo-code anymore :(
>
>Regards Kay

I think I get the gist of what you are trying to do, but I can't
follow it entirely. It looks like an interesting puzzle, but you'll
need to go a little slower for some of us. I tried though. :)

Ron_Adam

--
http://mail.python.org/mailman/listinfo/python-list
Re: Anonymous functions revisited : tuple actions
On 24 Mar 2005 22:16:10 -0800, "Kay Schluehr" <[EMAIL PROTECTED]>
wrote:
>It's all developed during this discussion. Sometimes I'm a bit
>surprised where it goes.
I enjoy exploring ideas this way. Many times it leads to dead ends or
you just end up with a long way back to where you started, but
sometimes you get a surprise, and almost always a deeper understanding
of the subject. :)
>To make my intention clear for another time, also for George who
>mistrusts these exercises altogether. I want to derive a syntax and
>semantics for anonymous functions ( called "tuple-actions" ) that are
>generalizations of rules that are already used implicitly within
>Python, e.g. tuple-unpacking. This is done by progressive interpretation
>and extension. They are not there by means of an accident, which is what
>Guido claims about the current lambda, which he feels is stuck onto the
>language.
Looking at the syntax of lambda, I think I agree with Guido.
result = lambda *args: expression
It works like a function, but is formatted like a for or if
statement. It should have been something like this.
result = lambda{ *args: expression}
Another interesting possibility by exploring ideas and concepts. :)
Using a dictionary instead of ()'s to pass the arguments and
expressions. This would simplify parsing it, because it could be
handled as an object instead of having to parse the args and
expression first.
What if you could:
x = lambda{ x, y: x+y}
Hmm, the comma creates a problem here, so...
x = lambda{ (x,y): x+y }
This is more consistent with Python syntax and makes more sense. The
args are in a tuple as they would be in a function.
x = lambda{ (x,y): x+y } is the same as x = function(x,y): return x+y
Could this work too?:
x, y, z = lambda{ (x,y): x+y, (x,z):x+z, (x,v):x+v }
Short hand for:
x,y,z = lambda{(x,y):x+y}, lambda{(x,z):x+z}, lambda{(x,v):x+v}
For compatibility purposes, you would need to give it a different
name:
af, afn, ann, lamb, lam, lm, ?
Or just call it what it is.. function{(args):expression}
Then it would be easy to explain, teach, and remember.
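As a point of reference for the discussion, the lambda Python already has is just an unnamed def; a minimal sketch (the names add_lambda and add_def are mine):

```python
# A quick check that lambda and def produce equivalent functions.
add_lambda = lambda x, y: x + y

def add_def(x, y):
    return x + y

# Both are ordinary function objects and behave identically.
assert add_lambda(2, 3) == add_def(2, 3) == 5
```

The only real differences are that the def form gets a `__name__` and can contain statements; the proposed brace syntax above would not change that.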
Ron_Adam
Re: Anonymus functions revisited
On 25 Mar 2005 10:09:50 GMT, Duncan Booth
<[EMAIL PROTECTED]> wrote:
>I've never found any need for an is_defined function. If in doubt I just
>make sure and initialise all variables to a suitable value before use.
>However, I'll assume you have a good use case.
I admit that that is the better practice. George's example was the
conversion of data from one form to another where the data is mixed
with complete and incomplete items. And Kay is looking at tuple
unpacking.
It's hard to beat try/except for these situations though. :)
I cleaned it up some more and figured out the proper use of
_getframe(). So no lambdas, and no passing of locals needed, and it
checks for globals and builtins before defining the default value so
as not to overwrite a readable value.
I'm not sure what the best behavior should be. Maybe a routine to
tell where a name is, i.e. local, global, builtin, or a writable
global? Maybe isa() could return the location or None? I think that
would be better.
The best purpose for utilities like these is for debugging and getting
feedback about the environment. So I'm thinking of putting them in a
module for that purpose. I have a subroutine to list all the names
attached to an object. I think I can add that in a bit too.
#---Here's the code-
import sys
def isa(v):
    """
    Check if a variable exists in the current
    (parent to this function), global, or
    builtin name spaces.

    use: bool = isa( str )
    returns True or False
    """
    plocals = sys._getframe(1).f_locals
    # vars(__builtins__) is the builtin namespace; calling
    # __builtins__.locals() would just invoke the builtin locals().
    if plocals.has_key(v) or globals().has_key(v) or \
       vars(__builtins__).has_key(v):
        return True
    return False
def ifno(v, obj=None):
    """
    Check if a variable does not exist; return a
    default value, otherwise return the variable's object.

    use: obj = ifno( str [,obj=None] )
    if str exists, returns str's object
    if str does not exist, returns the specified object
    """
    plocals = sys._getframe(1).f_locals
    if plocals.has_key(v):
        return plocals[v]
    if globals().has_key(v):
        return globals()[v]
    if vars(__builtins__).has_key(v):
        return vars(__builtins__)[v]
    return obj
def test():
    """
    Test isa() and ifno() functions:
    """
    # Totally useless routine. ;)
    import random
    for n in range(25):
        # Delete a random x,y,z coordinate to
        # simulate an unreliable data source.
        d = random.choice([1,2,3])
        if d==1:
            if isa('x'): del x
        elif d==2:
            if isa('y'): del y
        else:
            if isa('z'): del z
        # Replace the missing variable with a random number.
        r = int(random.random()*100)
        x, y, z = ifno('x',r), ifno('y',r), ifno('z',r)
        print x, y, z

if __name__ == '__main__':
    test()
#-
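The try/except approach mentioned earlier in the thread can also be sketched without any frame inspection by passing the namespace dict in explicitly (ifno_try is a hypothetical name, not part of the code above):

```python
# A try/except variant of ifno(): the caller passes the namespace
# explicitly (e.g. locals() or globals()), so no _getframe() is needed.
def ifno_try(name, namespace, default=None):
    try:
        return namespace[name]
    except KeyError:
        return default

assert ifno_try('x', {'x': 1}) == 1      # name exists
assert ifno_try('q', {'x': 1}, 0) == 0   # name missing, default used
```

Being explicit about the namespace avoids the question of which frame's locals you are really looking at.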
Re: Python 2.4 | 7.3 The for statement
>-- Your code
>foo = 0
>for item1 in range(10):
>    for item2 in range(10):
>        foo = item1 + item2
>        if foo == 2:
>            print "Let's see"
>            break # let's go
>    if (item1 + item2) == 2:
>        break # one more time
>print foo

The outer loop never reaches 1, so we can get rid of it along with the
second if statement; the additions aren't needed either. So what you
have left is this:

for foo in range(3):
    pass
print "Let's see"
print foo

Which is the same as:

print "Let's see\n", foo

I know that isn't the point. Just couldn't resist. ;)

Ron_Adam
Re: Anonymous functions revisited : tuple actions
On Fri, 25 Mar 2005 18:58:27 +0100, Reinhold Birkenfeld
<[EMAIL PROTECTED]> wrote:
>Ron_Adam wrote:
>
>> What if you could:
>>
>> x = lambda{ x, y: x+y}
>> Hmm comma creates a problem here. so...
>
>>>> from __future__ import braces
>SyntaxError: not a chance
>>>>
>
>Reinhold ;)
LOL, :-)
Is that to discourage people from wanting to use them as block
designators?
Re: Anonymous functions revisited
On Fri, 25 Mar 2005 17:09:38 -0500, "George Sakkis" <[EMAIL PROTECTED]> wrote:

>I posted a recipe in python cookbook
>(http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/392768) for the
>subproblem I was interested in initially (variable-length iterable
>unpacking), and I prefer it over explicit try/except (but of course I'm
>biased :-)). Kay is proposing something even more general and powerful,
>and it will be interesting to see if all this brainstorming can be
>brought forward more 'formally', e.g. at a PEP or pre-PEP level.
>
>Regards,
>George

Looks good George. :)

I'm not sure what Kay is trying for, but it does look interesting.
I'm all for new features as long as they are consistent, easy to use,
and easy to remember as well.

I keep finding ways to improve the little routines I'm playing
with. ;) I'll probably post them in the cookbook also, and maybe put
them together in a module.

A few more pieces and I should be able to build a name space explorer
which I think will be good for debugging programs. I'm thinking you
could put it in the program where you are having problems and it will
open a tree-type window where you can examine all the names and
objects at that point. When done, close it and the program continues.
It will give you a little more info than sticking print statements
here and there.

Ron
Re: str vs dict API size (was 'Re: left padding zeroes on a string...')
On Fri, 25 Mar 2005 18:06:11 -0500, "George Sakkis" <[EMAIL PROTECTED]> wrote:

>I'm getting off-topic here, but it strikes me that strings have so many
>methods (some of which are of arguable utility, e.g. swapcase), while
>proposing two useful methods (http://tinyurl.com/5nv66) for dicts -- a
>builtin with a considerably smaller API than str -- meets so much
>resistance. Any insight ?
>
>George

I did a quick check.

>>> len(dir(str))
63
>>> len(dir(int))
53
>>> len(dir(float))
45
>>> len(dir(dict))
40
>>> len(dir(list))
42
>>> len(dir(tuple))
27

We need more tuple methods! jk ;)

Looks like the data types (strings, int and float) have more methods
than dict, list, and tuple. I would expect that, because there are
more ways to manipulate data than are needed to manage containers.

Ron
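The quick check above can be recomputed on any interpreter; the exact counts have drifted since 2005 (the figures shown are from Python 2.4), but the ordering argued here still holds. A sketch:

```python
# Recompute the method-count comparison from the post. Exact numbers
# vary by Python version, so only the relative ordering matters.
counts = dict((t.__name__, len(dir(t)))
              for t in (str, int, float, dict, list, tuple))
print(counts)

# str carries far more methods than the container types.
assert counts['tuple'] < counts['dict']
assert counts['dict'] < counts['str']
```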
Re: Grouping code by indentation - feature or ******?
On Fri, 25 Mar 2005 11:31:33 -0800, James Stroud <[EMAIL PROTECTED]> wrote:

>On Friday 25 March 2005 08:39 am, Ivan Van Laningham wrote:
>> As far as grouping by indentation goes, it's why I fell in love with
>> Python in the first place. Braces and so on are just extraneous cruft
>> as far as I'm concerned. It's the difference between Vietnamese verbs
>> and Latin verbs;-)
>
>Say I buy into the indentation ideology. Python then has this inconsistency: :
>
>Why do we need : at the end of our if and for loops? I spend approximately 6
>minutes/100 lines of code going back and finding all of the times I missed :.
>Is it for cheating?
>
>if False: print ":"
>
>Now, what happened to the whitespace idea here? This code seems very
>unpythonic. I think : is great for slices and lambda where things go on one
>line, but to require it to specify the start of a block of code seems a
>little perlish.

You really don't need a period at the end of sentences A double space
and a capital is enough to tell you where one sentence ends and the
next one starts Yet, I find the presence of the period makes written
language easier to read in the same way the colon in python makes
code easier to read ;)

Ron
Turn off globals in a function?
Is there a way to hide global names from a function or class?

I want to be sure that a function doesn't use any global variables by
mistake. So hiding them would force a name error in the case that I
omit an initialization step. This might be a good way to quickly
catch some hard to find, but easy to fix, errors in large code
blocks.

Examples:

def a(x):
    # ...
    x = y    # x is assigned the global y unintentionally.
    # ...
    return x

def b(x):
    # hide globals somehow
    # ...
    x = y    # Causes a name error
    # ...
    return x

y = True

>>> a(False)
True

>>> b(False)
*** name error here ***

Ron_Adam
Re: Turn off globals in a function?
On Sat, 26 Mar 2005 12:18:39 -0800, Michael Spencer
<[EMAIL PROTECTED]> wrote:
>Ron_Adam wrote:
>> Is there a way to hide global names from a function or class?
>>
>> I want to be sure that a function doesn't use any global variables by
>> mistake. So hiding them would force a name error in the case that I
>> omit an initialization step. This might be a good way to quickly
>> catch some hard to find, but easy to fix, errors in large code blocks.
>>
>> Examples:
>>
>> def a(x):
>> # ...
>> x = y # x is assigned to global y unintentionally.
>> # ...
>> return x
>>
>> def b(x):
>> # hide globals somehow
>> # ...
>> x = y# Cause a name error
>> # ...
>> return x
>>
>>
>> y = True
>>
>>
>>>>>a(False):
>>
>> True
>>
>>
>>>>>b(False):
>>
>> *** name error here ***
>>
>>
>> Ron_Adam
>>
>>
>For testing, you could simply execute the function in an empty dict:
>
> >>> a = "I'm a"
> >>> def test():
> ... print a
> ...
> >>> test()
> I'm a
> >>> exec test.func_code in {}
> Traceback (most recent call last):
>File "", line 1, in ?
>File "", line 2, in test
> NameError: global name 'a' is not defined
> >>>
I didn't know you could do that. Interesting. :)
I was hoping for something inline that I could use with an assert
statement. But this is good too, I'll have to play around with it a
bit. Thanks.
Ron
>This would get more complicated when you wanted to test calling with
>parameters,
>so with a little more effort, you can create a new function where the globals
>binding is to an empty dict:
>
> >>> from types import FunctionType as function
> >>> testtest = function(test.func_code, {})
> >>> testtest()
> Traceback (most recent call last):
>File "", line 1, in ?
>File "", line 2, in test
> NameError: global name 'a' is not defined
> >>>
>
>HTH
>
>Michael
Re: Turn off globals in a function?
>If you put the above def b in e.g. a_module.py, and do a (untested ;-)
>
>    from a_module import b
>
>instead of defining it locally, then the global references from b
>(and whatever else you import from a_module) should be to the global
>dict defined for a_module (i.e., its outermost scope), not to the
>globals where you do the import.
>
>>y = True
>>
>>>>>a(False):
>>True
>
>Should work if you define a in place having the same scope as the y
>assignment.
>
>>>>>b(False):
>>*** name error here ***
>>
>UIAM it should do this if you import b as above.
>
>Regards,
>Bengt Richter

Good suggestion. Thanks. I was somewhat aware of the module scope,
but was looking for a way to apply it on a more local level.
Michael's suggestion looks interesting for that.

Ron_Adam
Re: Turn off globals in a function?
On 26 Mar 2005 22:51:14 -0800, [EMAIL PROTECTED] (Oren
Tirosh) wrote:
>Ron_Adam <[EMAIL PROTECTED]> wrote in message news:<[EMAIL PROTECTED]>...
>> Is there a way to hide global names from a function or class?
>>
>> I want to be sure that a function doesn't use any global variables by
>> mistake. So hiding them would force a name error in the case that I
>> omit an initialization step. This might be a good way to quickly
>> catch some hard to find, but easy to fix, errors in large code blocks.
>
>def noglobals(f):
>. import new
>. return new.function(
>. f.func_code,
>. {'__builtins__':__builtins__},
>. f.func_name,
>. f.func_defaults,
>. f.func_closure
>. )
>
>You can use it with the Python 2.4 @decorator syntax:
>
>@noglobals
>def a(...):
>. # code here
Cool! I haven't played with decorators yet. :)
I noticed the 'new' module is deprecated. It referred me to call the
object type directly instead, so this is probably the better way.
def noglobals(f):
    return type(f)(
        f.func_code,
        {'__builtins__':__builtins__},
        f.func_name,
        f.func_defaults,
        f.func_closure )
@noglobals
def a():
    global x
    try: x
    except: x=0
    x += 1
    return x

x = 5
for n in range(10):
    print a()
print x    # x is still 5
So this is another, but longer, way to get generator-like persistent
state in a function.
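For reference, the same decorator in current Python 3 spelling (the function attributes were renamed: func_code became __code__, func_defaults became __defaults__, and so on); a sketch:

```python
import types

def noglobals(f):
    # Rebuild f over an empty globals dict; builtins stay reachable
    # because '__builtins__' is the one key we keep.
    return types.FunctionType(
        f.__code__,
        {'__builtins__': __builtins__},
        f.__name__,
        f.__defaults__,
        f.__closure__,
    )

y = 5

@noglobals
def show_y():
    return y    # global lookup now fails: y is not in the new dict

try:
    show_y()
    caught = False
except NameError:
    caught = True
```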
>>> print type(a).__doc__
function(code, globals[, name[, argdefs[, closure]]])
Create a function object from a code object and a dictionary.
The optional name string overrides the name from the code object.
The optional argdefs tuple specifies the default argument values.
The optional closure tuple supplies the bindings for free variables.
>>>
What are 'free variables'?
And is there a way to directly read what names in a function are set
with the global statement? (Other than looking at the monitor. ;)
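Both questions can be answered from the function's code object; a sketch in current Python 3 spelling (Python 2 used func_code, and dis.get_instructions only exists in Python 3):

```python
import dis

# Free variables: names a nested function reads from an enclosing
# scope; they are listed on the code object.
def make_adder(n):
    def add(x):
        return x + n    # n is a free variable inside add
    return add

add5 = make_adder(5)
assert add5(2) == 7
assert add5.__code__.co_freevars == ('n',)

# Names assigned under a global statement compile to STORE_GLOBAL,
# so they can be read back out of the bytecode.
def bump():
    global counter
    counter = 1

stores = [ins.argval for ins in dis.get_instructions(bump)
          if ins.opname == 'STORE_GLOBAL']
assert stores == ['counter']
```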
Ron_Adam
Re: Python List Issue
On Sun, 27 Mar 2005 09:01:20 GMT, "Nick L" <[EMAIL PROTECTED]> wrote:

>I've hit a brick wall on something that I'm guessing is pretty simple but
>it's driving me nuts.

Yes, I've run across that too a few times.

>How on earth can I make a complete separate copy of a list without it
>being attached to the original in any way shape or form so that I can
>modify it at will and not worry about the original?

This routine copies a list of lists.

# Makes a copy of a list of lists
# containing simple data.
def copylistlist(alist):
    copy = []
    for i in alist:
        if type(i) is list:
            i = copylistlist(i)
        copy.append(i)
    return copy

bob = [[[0, 0]]]
final = copylistlist(bob)
print 'bob:', bob
print 'Final:', final

This still doesn't create new items within the new list, but with
literal data consisting of letters and numbers it will work. If you
are working with a data tree, you may be able to modify this to do
what you want. Just add a test in the inner loop for the data you
want to modify.

>Any ideas, suggestions, comments are greatly appreciated
>thanks
>
>Nick

Hope that helps.

Ron_Adam
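For the record, the standard-library answer to Nick's question is copy.deepcopy, which recursively copies arbitrary nesting without a hand-rolled routine:

```python
import copy

bob = [[[0, 0]]]
final = copy.deepcopy(bob)    # fully independent copy, all levels

final[0][0][0] = 9            # mutate the copy...
assert bob == [[[0, 0]]]      # ...and the original is untouched
assert final == [[[9, 0]]]
```

(copy.copy gives only a shallow copy, which is exactly the "attached to the original" behavior Nick ran into.)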
Re: String Splitter Brain Teaser
On Sun, 27 Mar 2005 14:39:06 -0800, James Stroud
<[EMAIL PROTECTED]> wrote:
>Hello,
>
>I have strings represented as a combination of an alphabet (AGCT) and a an
>operator "/", that signifies degeneracy. I want to split these strings into
>lists of lists, where the degeneracies are members of the same list and
>non-degenerates are members of single item lists. An example will clarify
>this:
>
>"ATT/GATA/G"
>
>gets split to
>
>[['A'], ['T'], ['T', 'G'], ['A'], ['T'], ['A', 'G']]
Here are two ways without using regular expressions. Both are about
the same.
s = list("ATT/GATA/G")
result = []
while len(s) > 0:
    a = [s.pop(0)]
    if s and s[0] == '/':    # guard against running off the end
        s.pop(0)             # discard the '/'
        a.append(s.pop(0))
    result.append(a)
print result
[['A'], ['T'], ['T', 'G'], ['A'], ['T'], ['A', 'G']]
s = "ATT/GATA/G"
result = []
while len(s) > 0:
    if s[1:2] == '/':
        result.append([s[0], s[2]])
        s = s[3:]
    else:
        result.append([s[0]])
        s = s[1:]
print result
[['A'], ['T'], ['T', 'G'], ['A'], ['T'], ['A', 'G']]
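For comparison, here is a regular-expression version of the same split (a sketch; the pattern assumes single-character degeneracies on each side of the '/', as in the examples in this thread):

```python
import re

# Match "X/Y" pairs first, single letters otherwise, then split each
# match on '/' so degenerate positions become two-item lists.
s = "ATT/GATA/G"
result = [m.split('/') for m in re.findall(r'./.|.', s)]
print(result)
# [['A'], ['T'], ['T', 'G'], ['A'], ['T'], ['A', 'G']]
```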
Re: numbering variables
On Mon, 28 Mar 2005 13:39:17 +0200, remi <[EMAIL PROTECTED]> wrote:

>Hello,
>
>I have got a list like: mylist = ['item 1', 'item 2', 'item n'] and
>I would like to store the string 'item 1' in a variable called s_1,
>'item 2' in s_2, ..., 'item i' in s_i, ... The length of mylist is
>finite ;-)
>Any ideas?
>Thanks a lot.
>Rémi.

Why not just access the list by index? Just start with zero instead
of 1.

mylist = ['item 1', 'item 2', 'item n']

mylist[0] is 'item 1'
mylist[1] is 'item 2'
.
.
mylist[n-1] is 'item n'

Ron Adam
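If the s_1, s_2, ... names themselves are really wanted, the usual answer is a dictionary rather than actual variables; a sketch:

```python
# Map the requested names to values; 's_1' etc. are just dict keys,
# not real variables, which keeps data out of the program's names.
mylist = ['item 1', 'item 2', 'item n']
s = dict(('s_%d' % (i + 1), item) for i, item in enumerate(mylist))

assert s['s_1'] == 'item 1'
assert s['s_3'] == 'item n'
```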
Module function can't see globals after import.
Using this example:

[in mymodule]

def whatisx():
    try:
        print 'x is', x
    except:
        print 'There is no x'

[in mymain]

from mymodule import whatisx
x = 10
whatisx()

prints -> There is no x

Is there a way to tell the imported function whatisx to use mymain's
globals instead of its own copy, without passing it as an argument?

I'm experimenting with a function that limits both the reading and
writing of globals and builtins to only specific names. But after
it's imported it has its own globals, so it only works if it's copied
to the current module.

def howareyou():
    allow_globals( rw_globals='y', r_globals='x', builtins='int')
    global y    # This is still needed.
    # ... lots more code.
    y = int(x*y)
    return 'I am fine.'

x = 5
y = 10
print howareyou()

This function has read/write access to y, can only read x, and has
read access only to int(). It will generate an error if an attempt is
made to use anything other than what is specified.

Using:

allow_globals()              # No globals, and all builtins.
allow_globals(builtins=None) # No globals, and no builtins.

Still have a few minor issues to work out, like getting it to work
correctly after it's imported. :/

Ron_Adam
Re: Module function can't see globals after import.
On Mon, 28 Mar 2005 20:46:32 +0300, Christos "TZOTZIOY" Georgiou
<[EMAIL PROTECTED]> wrote:

>On Mon, 28 Mar 2005 17:19:42 GMT, rumours say that Ron_Adam
><[EMAIL PROTECTED]> might have written:
>
>>Is there a way to tell the imported function printx to use mymain's
>>globals instead of it's own copy without passing it as an argument?
>
>No, but if you insist on working with globals for some reason of your
>own, in the module you can:

Actually I avoid them wherever and whenever possible. ;) This is more
of an attempt to help find and prevent problems of that sort. I've
started to put together a tool kit package of sorts for finding and
preventing problems. It needs to look at the name spaces to work
correctly.

>import __main__
>
>and access the main module's globals (e.g. 'x') as:
>
>__main__.x

Thanks, that worked fine! :)
Re: Little Q: how to print a variable's name, not its value?
On 28 Mar 2005 23:01:34 -0800, "Dan Bishop" <[EMAIL PROTECTED]> wrote:

>>>> def print_vars(vars_dict=None):
>...     if vars_dict is None:
>...         vars_dict = globals()
>...     for var, value in vars_dict.items():
>...         print '%s = %r' % (var, value)
>...
>>>> myPlace = 'right here'
>>>> myTime = 'right now'
>>>> print_vars()
>print_vars = <function print_vars at 0x...>
>__builtins__ = <module '__builtin__' (built-in)>
>myTime = 'right now'
>myPlace = 'right here'
>__name__ = '__main__'
>__doc__ = None

Fred = 5
John = 8
Winner = John

Both John and Winner are pointing to the literal 8.

Mixing data and program code, i.e. variable names as data, is not a
good idea. Dictionaries are one of Python's best features. ;)

Ron
Re: good design & method calls
On Tue, 29 Mar 2005 09:09:37 -0500, Charles Hartman <[EMAIL PROTECTED]> wrote:

>I know the answer to this is going to be "It depends . . .", but I want
>to get my mind right. In Fowler's *Refactoring* I read: "Older
>languages carried an overhead in subroutine calls, which deterred
>people from small methods" (followed by the basic "Extract Method"
>advice). In Skip Montanaro's "Python Performance Tips"
>(http://manatee.mojam.com/~skip/python/fastpython.html) I read: ". . .
>use local variables wherever possible. If the above loop is cast as a
>function, append and upper become local variables. Python accesses
>local variables much more efficiently than global variables."
>
>These two pieces of advice imply opposite kinds of code revisions.
>Obviously they have different purposes, and both are right at different
>times. I wonder if anyone has some wisdom about how to think about when
>or how often to do which, how to balance them ultimately, and so on.
>
>Charles Hartman
>Professor of English, Poet in Residence
>the Scandroid is at: http://cherry.conncoll.edu/cohar/Programs
>http://villex.blogspot.com

It depends... ;)

Converting small functions to inline code should usually only be done
in the innermost loops, to optimize performance if it's needed.
Moving calculations out of those loops by doing them ahead of time is
also good.

It's good practice in Python to put all of your code in functions or
classes, even if it's a single main() function.

def main():
    # (program code)

main()

Then you avoid the slower globals unless you declare them with the
global statement. If main, or any other function, gets too big or
complex, split it up as needed.

Ron
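Skip's local-variable advice is easy to measure with timeit; a sketch (the function names are mine, and the exact speedup varies by interpreter version):

```python
import timeit

x = 10

def use_global():
    total = 0
    for _ in range(1000):
        total += x          # global lookup on every iteration
    return total

def use_local(x=x):         # bind the global into a local once
    total = 0
    for _ in range(1000):
        total += x
    return total

# Same result either way; the local version is usually faster.
assert use_global() == use_local() == 10000

t_global = timeit.timeit(use_global, number=200)
t_local = timeit.timeit(use_local, number=200)
print(t_global, t_local)
```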
Re: Little Q: how to print a variable's name, not its value?
On Tue, 29 Mar 2005 11:23:45 -0500, Bill Mill <[EMAIL PROTECTED]>
wrote:
>On Tue, 29 Mar 2005 14:34:39 GMT, Ron_Adam <[EMAIL PROTECTED]> wrote:
>> On 28 Mar 2005 23:01:34 -0800, "Dan Bishop" <[EMAIL PROTECTED]> wrote:
>>
>> >>>> def print_vars(vars_dict=None):
>> >...     if vars_dict is None:
>> >...         vars_dict = globals()
>> >...     for var, value in vars_dict.items():
>> >...         print '%s = %r' % (var, value)
>> >...
>> >>>> myPlace = 'right here'
>> >>>> myTime = 'right now'
>> >>>> print_vars()
>> >print_vars =
>> >__builtins__ =
>> >myTime = 'right now'
>> >myPlace = 'right here'
>> >__name__ = '__main__'
>> >__doc__ = None
>>
>> Fred = 5
>> John = 8
>> Winner = John
>>
>> Both John and Winner are pointing to the literal '8'.
>
>ummm, yes, of course they are. What's your point?
Hi Bill,
My point is that if you look up the name and print it, instead of:

Fred has 5 points
John has 8 points

you could get:

Fred has 5 points
Winner has 8 points

or something else, depending on how many references you made to the
value 8.
>> Mixing data and program code, ie.. variable names as data, is not a
>> good idea.
>
>Down with eval! Exile exec! A pox on both their houses!
>
>(i.e. I respectfully disagree that mixing data with program code is a bad idea)
(I respectfully acknowledged your opinion.)
To be fair, it's not always bad. But in most cases it is better not
to.
I wasn't referring to using exec or eval, but to directly using data
values in the program code.
Here's an example of mixing data and code. If I write a program that
checks a list for specific names and prints the corresponding value
for each.
I would not do this: ( Lots of stuff wrong with this example! )

playerlist = ['John','Bob','Fred']
John = 6
Bob = 8
Fred = 0
for name in playerlist:
    if name == 'John':
        print 'John', John
    if name == 'Bob':
        print 'Bob', Bob

This is only good for one set of data, and each time my data changes,
I would need to rewrite the program code also. This has data as
program code in both the variables and in the comparisons.
Not mixing data and code:

playerlist = {'John':6, 'Bob':8, 'Fred':0}
players = ['John', 'Bob']
for name in players:
    print name, playerlist[name]

This does the same thing as above, but data and program code are not
mixed. It's much easier to maintain and reuse.
Ron
>Peace
>Bill Mill
>bill.mill at gmail.com
Re: Little Q: how to print a variable's name, not its value?
On Tue, 29 Mar 2005 14:58:45 -0500, Bill Mill <[EMAIL PROTECTED]> wrote:

>>
>> Or something else depending on how many references you made to the
>> value 8.
>
>Yes, this is true, assuming that he looks for keys with the value 8 in
>locals(). It's not necessarily true if there's a way to ask python
>what the name of John is, which is what the OP was asking. I just
>wanted you to explicitly say what you were implying so that we could
>discuss it.

OK, :) I should have explained my position better the first time. I
was a bit too brief.

Unfortunately, while classes and functions have a __name__ attribute,
simple data types do not. I've been playing around with a way to
explore name spaces, but once you drop into classes and functions,
the references can lead you into endless loops.

>Yup, I meant to say that I disagree that mixing data with program code
>is *always* a bad idea. I had a "d'oh!" moment when I hit send.

I do that more often than I like. Maybe I should have said it is
'often' not a good idea.

>Naturally, I wasn't suggesting that anyone (shudder) do things like
>your examples of poor code. I had a much smaller point, about which I
>was not clear: Sometimes, it is handy to mix code and data. There
>*are* legitimate uses of reflection, eval, and exec.

I didn't think you would suggest that, but I thought it was a good
chance to clarify what I meant the first time. If you missed my
point, then so did others. An obvious example is sometimes the best
way to demonstrate a basic concept.

There are good uses for eval and exec. Ultimately it's up to the
programmer to decide the best use of the tools. I just like to keep
my tools (code) and my parts (data) in separate bins. ;)

Ron
Re: Turn off globals in a function?
On Thu, 31 Mar 2005 16:28:15 +1200, Greg Ewing
<[EMAIL PROTECTED]> wrote:
>Oren Tirosh wrote:
>> def noglobals(f):
>> . import new
>> . return new.function(
>> . f.func_code,
>> . {'__builtins__':__builtins__},
>> . f.func_name,
>> . f.func_defaults,
>> . f.func_closure
>> . )
>
>Be aware that this will render the function incapable
>of seeing *any* globals at all, including other
>functions and classes defined in the same module --
>which you may find rather inconvenient!
Developing this idea further...
This allows a programmer to specify what globals to allow read and or
writes.
Cheers,
Ron
#---start---
# useglobals.py
"""
A function to specify what globals and builtins
a function may access.
Author: Ronald Adam
"""
def useglobals(rw_globals=None, r_globals=None, builtins=True):
    #import dis
    import sys
    write_list = []
    read_list = []
    if rw_globals != None:
        rw_globals = rw_globals.replace(' ','')
        write_list = rw_globals.split(',')
    if r_globals != None:
        r_globals = r_globals.replace(' ','')
        read_list = r_globals.split(',')
    if builtins == True:
        read_list.extend(dir(__builtins__))
    elif builtins != None:
        builtins = builtins.replace(' ','')
        read_list.extend(builtins.split(','))
    # Add own name to read list.
    read_list.append(sys._getframe(0).f_code.co_name)
    read_list.extend(write_list)
    #print read_list, write_list
    names = sys._getframe(1).f_code.co_names
    code = sys._getframe(1).f_code.co_code
    #print dis.disassemble(sys._getframe(1).f_code)
    i = 0
    while i < len(code):
        #print ord(code[i])
        op = ord(code[i])
        if op == 116:    # dis.opmap['LOAD_GLOBAL']
            oparg = ord(code[i+1]) + ord(code[i+2]) * 256
            if str(names[oparg]) not in read_list:
                raise NameError, "read from global name %s, detected" % names[oparg]
        elif op == 97:   # dis.opmap['STORE_GLOBAL']
            oparg = ord(code[i+1]) + ord(code[i+2]) * 256
            if names[oparg] not in write_list:
                raise NameError, "write to global name %s, detected" % names[oparg]
        if op >= 90:     # dis.HAVE_ARGUMENT
            i += 3       # Not sure if this is always the same?
        else:
            i += 1
if __name__ == '__main__':
    """
    Test useglobals() function. Change values to test
    for error catching.
    """
    def a():
        useglobals(rw_globals='x', r_globals='y,b')
        # This function can read or write 'x',
        # can read 'y' and function 'b',
        # and can access all builtins.
        global x
        y = 5
        x += y
        x = b(x)
        return x
    def b(g):
        useglobals('', 'y,c', 'int')
        # This function can only read 'y' and
        # function 'c' in globals, and can
        # only access 'int' in builtins.
        g = g + y
        c(int(g))
        return g
    def c(w):
        useglobals(builtins=None)
        # This function has no builtins or globals.
        w = w**2
        return w
    y = 4
    x = 5
    z = 6
    print a(), x, y, z
#---end---
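(An aside on the scanning loop above: hard-coding opcode numbers like 116 and the three-byte argument step is fragile across Python versions. On current Pythons the `dis` module can decode the bytecode itself; a rough equivalent of just the scanning part — not the checking logic — might look like this, with `global_accesses` and the sample function being hypothetical names:)

```python
import dis

def global_accesses(func):
    # Collect the names a function reads from and writes to
    # at global scope, letting dis decode the bytecode.
    reads, writes = set(), set()
    for ins in dis.get_instructions(func):
        if ins.opname == 'LOAD_GLOBAL':
            reads.add(ins.argval)
        elif ins.opname == 'STORE_GLOBAL':
            writes.add(ins.argval)
    return reads, writes

def f():
    global counter
    counter = len(data)  # len and data resolve as global loads

reads, writes = global_accesses(f)
print(sorted(reads), sorted(writes))  # -> ['data', 'len'] ['counter']
```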
--
http://mail.python.org/mailman/listinfo/python-list
Re: Little Q: how to print a variable's name, not its value?
On 30 Mar 2005 08:43:17 GMT, Duncan Booth <[EMAIL PROTECTED]> wrote:

>Here is a rough attempt at printing the names of a variable. It will pick
>up several names where appropriate, but deliberately doesn't attempt to
>get all possible names (as you say, that could result in endless loops).
>In particular, for the Fred=5/John=8/Winner=8 example it will only find
>one of John or Winner since it only picks at most one match from each dict
>or list. It doesn't yet manage to correctly lookup attributes (e.g. slots)
>when they aren't stored in a __dict__, nor does the output distinguish
>between dictionary keys and values (so encodings.cp437.encoding_map[8]
>below actually refers to the key not the value).

Here's what I've been working on. It still has some glitches in it,
but I think it has potential as an instructional/diagnostic tool. I'm
going to repost this with the source as its own topic, maybe it can be
developed further. :)

Cheers,
Ron

IDLE 1.1.1c1   No Subprocess
>>> from pnames import pnames
>>> pnames()
[globals]
 __builtins__ -->
 __doc__ -->
 __file__ --> C:\Python24\Lib\idlelib\idle.pyw
 __name__ --> __main__
 idlelib -->
 pnames -->
Paused
>>> John = 8
>>> Fred = 6
>>> Winner = John
>>> players = [John, Fred]
>>> pnames()
[globals]
 __builtins__ -->
 __doc__ -->
 __file__ --> C:\Python24\Lib\idlelib\idle.pyw
 __name__ --> __main__
 Fred --> 6
 idlelib -->
 John --> 8
 players --> [8, 6]
 pnames -->
 Winner --> 8
Paused

Both Winner and John point to the literal '8', but since the number
'8' can never be changed, it doesn't matter. You can look up the
number '8' and find both John and Winner, but changing one doesn't
change the other.

>>> John = 9
>>> pnames()
[globals]
 __builtins__ -->
 __doc__ -->
 __file__ --> C:\Python24\Lib\idlelib\idle.pyw
 __name__ --> __main__
 Fred --> 6
 idlelib -->
 John --> 9
 players --> [8, 6]
 pnames -->
 Winner --> 8
Paused

Winner didn't change its value.
>>> scores = players
>>> pnames()
[globals]
 __builtins__ -->
 __doc__ -->
 __file__ --> C:\Python24\Lib\idlelib\idle.pyw
 __name__ --> __main__
 Fred --> 6
 idlelib -->
 John --> 9
 players --> [8, 6] <-- scores
 pnames -->
 scores --> [8, 6] <-- players
 Winner --> 8
Paused
>>>

Here, players and scores are both mutable; changing one will change
the other, and so it shows that the list [8, 6] has more than one
name.

Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
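(The session above boils down to the difference between rebinding a name and mutating a shared object; a minimal stand-alone version of what it demonstrates:)

```python
# Rebinding: both names refer to the int 8 until one is rebound.
john = 8
winner = john
john = 9               # rebinds john only
assert winner == 8     # winner still refers to the old object

# Mutation: two names for one list see each other's changes.
players = [8, 6]
scores = players       # same list object, two names
scores[0] = 9          # mutate through one name...
assert players == [9, 6]   # ...visible through the other
assert players is scores
```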
Printing Varable Names Tool.. Feedback requested.
Hi, Sometimes it just helps to see what's going on, so I've been
trying to write a tool to examine what names are pointing to what
objects in the current scope.
This still has some glitches, like not working in winpython or the
command line, I get a 'stack not deep enough' error. I haven't tested
it on linux yet either. It could be something with my python
install.(?)
Anyway, here's the source with an example output. I'm still not sure
what to call it. Pnames stands for printnames, but if you think of
something better let me know.
Some things I think will be good is shortening long output such as
lists and dictionaries to '[...]' or '[:]', or a few chars at the
beginning and end with '...' in the middle. I think it's important to
keep the output short and readable.
Any suggestions would be appreciated. :)
Ron
IDLE 1.1.1c1 No Subprocess
>>>
[globals]
__builtins__ -->
__doc__ -->
__file__ --> C:\Python24\Lib\idlelib\idle.pyw
__main__ -->
__name__ --> __main__
g --> 45
glist --> ['a changed', 'list'] <-- test2:a
h --> 45
idlelib -->
pnames -->
sys -->
test1 -->
test2 -->
test2.at1 --> [True] <-- test2:aaa
[test1]
t --> ['altered'] <-- test2:w
[locals:test2]
a --> ['a changed', 'list'] <-- globals:glist
aaa --> [True] <-- globals:test2.at1
w --> ['altered'] <-- test1:t
Paused
>>>
#---start---
# pnames.py
"""
A utility to print the value of variables for
debugging purposes.
To use: import pnames and insert pnames.pnames()
in the program where you want to examine names.
Pressing any key will continue.
Author: Ronald Adam
"""
import sys
if __name__ != '__main__':
    import __main__

def pnames():
    """
    View names in the current scopes.
    """
    objdict = {}   # Collect references to objects.
    namelist = []  # List of names and values by frame.
    n = 1
    name = None
    while name != 'runcode':
        # Move up in the frames to gather name data
        # until the application global frame is reached.
        name = sys._getframe(n).f_code.co_name
        if name != 'runcode':  # This keeps it out of idle's name space.
            namelst = [name, []]
            namespace = sys._getframe(n).f_locals
            keys = namespace.keys()
            # Get all the names in this frame.
            for k in keys:
                try:
                    trash = objdict[id(namespace[k])][name]
                except:
                    try:
                        objdict[id(namespace[k])][name] = [k]
                    except:
                        objdict[id(namespace[k])] = {name: [k]}
                else:
                    objdict[id(namespace[k])][name].append(k)
                namelst[1].append((k, namespace[k], id(namespace[k])))
                #namelist.append(namelst)
                #try:
                attribs = None
                try:
                    attribs = namespace[k].func_dict
                except:
                    pass
                if attribs:
                    for att in attribs:
                        attname = k + '.' + att
                        try:
                            trash = objdict[id(attribs[att])][attname]
                        except:
                            try:
                                objdict[id(attribs[att])][name] = [attname]
                            except:
                                objdict[id(attribs[att])] = {name: [attname]}
                        else:
                            objdict[id(attribs[att])][name].append(attname)
                        namelst[1].append((attname, attribs[att], id(attribs[att])))
            namelist.append(namelst)
        n += 1
    # Now print what we collected.
    namelist.reverse()  # Reverse it so we have globals at the top.
    tab = 0
    for gname, group in namelist:
        # Sort it.
        def sortnocase(stringlist):
            tupleList = [(x[0].lower(), x) for x in stringlist]
            tupleList.sort()
            return [x[1] for x in tupleList]
        group = sortnocase(group)
        if gname == chr(63):  # Idle uses this name as app-globals.
            gname = 'globals'
        if gname == namelist[-1:][0][0]:
            gname = 'locals:' + gname  # Indicate locals group.
        print '%s[%s]' % (' ' * tab, gname)
        tab += 1
        for name, obj, objid in group:
            # Print the variable name.
            print ' ' * tab, name, '-->',
            # Remove & replace a lot of clutter as we print it.
            # List other names pointing to mutable objects.
            if name == '__doc__':
                obj = ''
            if 'module' in str(obj):    # These remove clutter.
                obj = ''                # More probably needs to be
            if 'function' in str(obj):  # done here.
                obj = ''
            # Print the object.
            print obj,
            # Print the other names pointing to
            # the
Re: Printing Varable Names Tool.. Feedback requested.
On Thu, 31 Mar 2005 18:37:53 GMT, Ron_Adam <[EMAIL PROTECTED]> wrote:

>Hi, Sometimes it just helps to see what's going on, so I've been
>trying to write a tool to examine what names are pointing to what
>objects in the current scope.
>
>This still has some glitches, like not working in winpython or the
>command line, I get a 'stack not deep enough' error. I haven't tested
>it on linux yet either. It could be something with my python
>install.(?)

Here's the error I'm getting from the python command window.

C:\ron\rondev\pythonmods>python
Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import pnames
>>> pnames.pnames()
Traceback (most recent call last):
  File "", line 1, in ?
  File "pnames.py", line 28, in pnames
    name = sys._getframe(n).f_code.co_name
ValueError: call stack is not deep enough
>>>
--
http://mail.python.org/mailman/listinfo/python-list
Re: Printing Varable Names Tool.. Feedback requested.
Fixed it so it now runs from the command line and from winpython as
well as idle in Python 2.4 on Windows Xp. I still don't know about
linux systems.
I decided on viewnames.py as the filename and viewit() as the calling
name.
#---start---
# viewnames.py
"""
A utility to print the value of variables for
debugging and instructional purposes.
To use:
from viewnames import viewit
viewit()
Insert it in the program where you want to examine names.
It will pause after it prints so that it will work in loops.
Author: Ronald Adam
"""
import sys
if __name__ != '__main__':
import __main__
def readnames():
    """
    Build a sequential name list and a dictionary of
    objects and name references by groups.
    """
    objdict = {}   # Collect references to objects.
    namelist = []  # List of names and values by frame.
    n = 2
    name = None
    while name not in ['runcode', 'RunScript', chr(63)]:
        # Move up in the frames to gather name data
        # until the application global frame is reached.
        name = sys._getframe(n).f_code.co_name
        #print name
        namelst = [name, []]
        namespace = sys._getframe(n).f_locals
        keys = namespace.keys()
        # Get all the names in this frame.
        for k in keys:
            try:
                trash = objdict[id(namespace[k])][name]
            except:
                try:
                    objdict[id(namespace[k])][name] = [k]
                except:
                    objdict[id(namespace[k])] = {name: [k]}
            else:
                objdict[id(namespace[k])][name].append(k)
            namelst[1].append((k, namespace[k], id(namespace[k])))
            # Read any attributes if there is a dictionary.
            attribs = None
            try:
                attribs = namespace[k].func_dict
            except:
                pass
            if attribs:
                for att in attribs:
                    attname = k + '.' + att
                    try:
                        trash = objdict[id(attribs[att])][attname]
                    except:
                        try:
                            objdict[id(attribs[att])][name] = [attname]
                        except:
                            objdict[id(attribs[att])] = {name: [attname]}
                    else:
                        objdict[id(attribs[att])][name].append(attname)
                    namelst[1].append((attname, attribs[att], id(attribs[att])))
        namelist.append(namelst)
        n += 1
    return objdict, namelist
def sortnocase(stringlist):
    tupleList = [(x[0].lower(), x) for x in stringlist]
    tupleList.sort()
    return [x[1] for x in tupleList]
def printnames(objdict, namelist):
    """
    Now print what we collected.
    """
    namelist.reverse()  # Reverse it so we have globals at the top.
    tab = 0
    for gname, group in namelist:
        # Print group name.
        if gname == chr(63):  # Idle uses this name as app-globals.
            gname = 'globals'
        if gname == namelist[-1:][0][0]:
            gname = 'locals:' + gname  # Indicate locals group.
        print '%s[%s]' % (' ' * tab, gname)
        tab += 1
        # Print group items.
        # Sort group first.
        group = sortnocase(group)
        for name, obj, objid in group:
            # Print the variable name.
            print ' ' * tab, name, '-->',
            # Print object.
            # Remove & replace a lot of clutter as we print it.
            # List other names pointing to mutable objects.
            obj2 = obj  # Make a copy to edit if needed.
            if name == '__doc__':
                if len(str(obj)) > 30:
                    obj2 = obj2.strip()
                    obj2 = '"' + obj2[:6] + ' ... ' + obj2[-6:] + '"'
                #obj = ''
            if 'module' in str(obj):    # These remove clutter.
                obj2 = ''               # More probably needs to be
            if 'function' in str(obj):  # done here.
                obj2 = ''
            # Print the object.
            print obj2,
            # Print the other names pointing to
            # the object.
            endofline = ''
            # If object is immutable, don't print references.
            if type(obj) not in [int, str, float, bool, tuple] \
               and obj not in [True, False, None]:
                for key in objdict[objid].keys():
                    grp = key
                    if key == chr(63):
                        grp = 'globals'
                    namegroup = objdict[objid][key]
                    n = 0
                    for nm in namegroup:
                        if nm != name:
                            if n > 0:
                                endofline += ', '
                            if grp != gname:
                                endofline += grp + ':' + nm
                            else:
                                endoflin
Re: Printing Varable Names Tool.. Feedback requested.
On Thu, 31 Mar 2005 19:13:39 GMT, Ron_Adam <[EMAIL PROTECTED]> wrote:

>On Thu, 31 Mar 2005 18:37:53 GMT, Ron_Adam <[EMAIL PROTECTED]> wrote:
>
>>Hi, Sometimes it just helps to see what's going on, so I've been
>>trying to write a tool to examine what names are pointing to what
>>objects in the current scope.
>>
>>This still has some glitches, like not working in winpython or the
>>command line, I get a 'stack not deep enough' error. I haven't tested
>>it on linux yet either. It could be something with my python
>>install.(?)
>
>Here's the error I'm getting from the python command window.
>
>C:\ron\rondev\pythonmods>python
>Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] on win32
>Type "help", "copyright", "credits" or "license" for more information.
>>>> import pnames
>>>> pnames.pnames()
>Traceback (most recent call last):
>  File "", line 1, in ?
>  File "pnames.py", line 28, in pnames
>    name = sys._getframe(n).f_code.co_name
>ValueError: call stack is not deep enough
>>>>

I fixed it, it now works in winpython and from the command line. If
anyone's interested.

Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: New to programming question
On 31 Mar 2005 20:03:00 -0800, "Ben" <[EMAIL PROTECTED]> wrote:

>Could someone tell me what is wrong and give me a better alternative to
>what I came up with.

Separate your raw_input statements from your test. Your elif is
skipping over it. Try using only one raw_input statement right after
your while statement.

Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: dictionary: sorting the values preserving the order
On 31 Mar 2005 22:40:53 -0800, "Rakesh" <[EMAIL PROTECTED]>
wrote:
>Hi,
> For a particular problem of mine, I want to sort pairs
>by its value.
>
>Eg:
>
>Input:
>
>A, 4
>B, 5
>C, 1
>D, 2
>E, 3
>
>I would like the output to be:
>
>C
>D
>E
>A
>B
>
>i.e. I would like to get the keys in the sorted order of values.
Generally, dictionaries nearly always have two parts. The dictionary
itself, and a separate list of keys to access it with.
To access the dictionary in a particular order, you just need a sorted
key list.
Since what you want is to access by value, you need to create a second
dictionary with the values as the keys. That will only work if the
values never repeat. If they do, then you need to use a list and not a
dictionary.
This creates a second dictionary with a sorted value key list.
alpha_dict = {'A':4, 'B':5, 'C':1, 'D':2, 'E':3}
# Create a new dictionary with keys and values exchanged.
num_dict = {}
for k in alpha_dict.keys():
    num_dict[ alpha_dict[k] ] = k
# Get the num_dict keys and sort them.
num_keys = num_dict.keys()
num_keys.sort()
Ron
--
http://mail.python.org/mailman/listinfo/python-list
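(As noted above, the inverted-dict trick breaks as soon as two keys share a value. A duplicate-safe alternative, available from Python 2.4 on and still current, is to sort the keys using the dictionary's own lookup as the sort key:)

```python
alpha_dict = {'A': 4, 'B': 5, 'C': 1, 'D': 2, 'E': 3}

# Sort keys by the value each maps to; duplicate values are
# fine because no reverse dictionary is ever built.
keys_by_value = sorted(alpha_dict, key=alpha_dict.get)
print(keys_by_value)  # -> ['C', 'D', 'E', 'A', 'B']
```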
Re: Case-insensitive dict, non-destructive, fast, anyone?
On 01 Apr 2005 15:55:58 +0300, Ville Vainio <[EMAIL PROTECTED]>
wrote:
>> "Daniel" == Daniel Dittmar <[EMAIL PROTECTED]> writes:
>
>Daniel> Ville Vainio wrote:
>
>>> I need a dict (well, it would be optimal anyway) class that
>>> stores the keys as strings without coercing the case to upper
>>> or lower, but still provides fast lookup (i.e. uses hash
>>> table).
>
>Daniel> Store the original key together with the value and use a
>Daniel> lowercase key for lookup.
>
>That's what I thought initially, but the strings take most of the
>space in dict and I didn't feel like doubling the size.
>
>It would be the "simplest thing that could possibly work", though.
Try accessing the keys indirectly through another dictionary. That way
you don't have to change the original.

Lkeys = {}
for k in d.keys():
    Lkeys[k.lower()] = k

Then use:

value = d[ Lkeys[ key.lower() ] ]

To get your value from the original dictionary.
Watch out for duplicate keys in differing case in the original dict.
Ron
--
http://mail.python.org/mailman/listinfo/python-list
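(The indirection idea above, written out as a small helper. The names here are hypothetical, and a real implementation would build the lowercase index once and cache it rather than rebuilding it on every lookup:)

```python
def ci_lookup(d, key):
    # Map lowercased key -> original key, then use the original
    # key to read from the untouched dictionary.
    index = {}
    for k in d:
        index[k.lower()] = k
    return d[index[key.lower()]]

headers = {'Content-Type': 'text/html', 'X-Frame': 'deny'}
print(ci_lookup(headers, 'CONTENT-TYPE'))  # -> text/html
```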
Re: Ternary Operator in Python
On Fri, 1 Apr 2005 08:24:42 +0100 (BST), praba kar <[EMAIL PROTECTED]> wrote:

>Dear All,
>    I am new to Python. I want to know how to
>work with ternary operator in Python. I cannot
>find any ternary operator in Python. So Kindly
>clear my doubt regarding this

I've used boolean operations to do it.

result = (v == value) * first + (v != value) * second

Same as:

if v == value:
    result = first
else:
    result = second

Ron
--
http://mail.python.org/mailman/listinfo/python-list
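(One caveat with the boolean-arithmetic trick: both branches are always evaluated, unlike a real conditional. A side-by-side sketch; the `x if c else y` conditional expression was added to the language in Python 2.5, after this exchange:)

```python
def pick(v, value, first, second):
    # True/False act as 1/0, so exactly one term survives.
    # Note that both first and second are evaluated either way.
    return (v == value) * first + (v != value) * second

assert pick(3, 3, 10, 20) == 10
assert pick(4, 3, 10, 20) == 20

# The later conditional expression short-circuits instead:
v, value = 4, 3
result = 10 if v == value else 20
assert result == 20
```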
Decorater inside a function? Is there a way?
I'm trying to figure out how to test function arguments by adding a
decorator.

@decorate
def func(x):
    # do something
    return x

This allows me to wrap and replace the arguments with my own, but not
get the arguments that the original function received. To do that I
would need to put the decorator inside the function.

def func(x):
    @decorate
    # do something
    return x

Then I could use @decorators to check the function input for
conditions, ranges, and or types. Is there an equivalent way to do
that? Also can I use @decorate with assert?

Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorater inside a function? Is there a way?
On Fri, 01 Apr 2005 13:47:06 -0500, Jeremy Bowers <[EMAIL PROTECTED]> wrote:

>On Fri, 01 Apr 2005 18:30:56 +, Ron_Adam wrote:
>> I'm trying to figure out how to test function arguments by adding a
>> decorator.
>
>The rest of your message then goes on to vividly demonstrate why
>decorators make for a poor test technique.

So it's not possible to do. Ok, thanks.

>Is this an April Fools gag? If so, it's not a very good one as it's quite
>in line with the sort of question I've seen many times before. "I have
>a hammer, how do I use it to inflate my tire?"

Not an April fools gag, I'm just new to decorators, and google brings
up lots of discussions from the past on how they may be implemented in
the future, but not much on how they actually work or how to use them.
They don't seem to be well documented at present, possibly because the
syntax and/or function of them isn't completely decided on. I've been
able to figure out the basic principle from the examples I've found,
but that doesn't mean there aren't more possibilities I haven't found
yet.

>Assuming you're serious, why not use one of the many testing technologies
>actually designed for it, and tap into the associated body of knowledge on
>how to accomplish various tasks? Start with what you're trying to do, then
>work on how to do it.

I'm trying to understand the uses, limits, and possibilities of
decorators. It just occurred to me that wrapping the contents of a
function vs wrapping the function itself could be useful.

Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Pseudocode in the wikipedia
On Fri, 1 Apr 2005 12:15:35 -0800, James Stroud <[EMAIL PROTECTED]>
wrote:
>Is anybody else bothered by those stupid pascal-like ":=" assignment
>operators?
>
>Maybe, for the sake of adding more variety to the world, wiki should come up
>with a new assignment operator, like "==". I like that one because then it
>could really be original:
>
>if (bob = 4):
> bob == bob + 2
To me ":=" could mean to create a copy of an object... or should it
be "=:" ?
Or how about ":=)" to mean is equal and ":=(" to mean it's not.
Then there is ";=)", to indicate 'True', and ':=O' to indicate 'False'
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorater inside a function? Is there a way?
On Fri, 01 Apr 2005 16:46:14 -0500, Jeremy Bowers <[EMAIL PROTECTED]>
wrote:
>On Fri, 01 Apr 2005 19:56:55 +, Ron_Adam wrote:
>
>> On Fri, 01 Apr 2005 13:47:06 -0500, Jeremy Bowers <[EMAIL PROTECTED]>
>> wrote:
>>>Is this an April Fools gag? If so, it's not a very good one as it's quite
>>>in line with the sort of question I've seen many times before. "I have
>>>a hammer, how do I use it to inflate my tire?"
>>
>> Not an April fools gag, I'm just new to decorators and google brings
>> up lots of discussions from the past on how they may be implemented in
>> the future, but not much in actually how they work or how to use them.
>
>OK, just checking :-)
>
>A decorator is completely equivalent in principle to
>
>def function():
> pass
>function = decorator(function)
This helped some. Thanks.
>That's a simplified form; decorators can themselves be an expression which
>returns a callable that can be applied to a function and the rule for
>applying several in sequence work as you'd expect (pipelining earlier
>results into later ones, making for a great Obfuscated Python entry or
>two based on the "function name misdirection" trick), but this simplified
>form captures the essense, which is what I think you're looking for. In
>particular, it's just "syntax sugar", not a "special feature".
Are you sure? There appears to be some magic involved with these,
things happening under the hood with argument passing.
def decorate(function):
    def wrapper(args):
        print 'args =', args
        return function(args)
    return wrapper
@decorate
def func(s):
    print s

func('hello')
In this example, how does wrapper get the correct arguments? This
leads me to believe what I'm looking for is possible, yet in this case
there isn't any way to pass new arguments to the wrapper without
losing the original ones.
Wait a min, hold the phone.. Eureka! :) I just figured how to do it.
(after trying it in idle)
def append_arg(n_args):
    def get_function(function):
        def wrapper(args):
            return function(args + '-' + n_args)
        return wrapper
    return get_function

@append_arg('goodbye')
def func(s):
    print s

func('hello')
prints:
hello-goodbye
Ok, this isn't a very useful example, but it demonstrates something
important: there seems to be a stack involved in the argument passing
of nested defined functions. Any arguments passed in the decorators
get put on top of the stack, and nested functions pull them back off.
Does this sound right?

I still feel it can be simplified a bit. These aren't easy to
understand, and having to nest functions like this adds to the
confusion. Possibly being able to get the argument "stack", as it
appears to be, directly at the first level could make things a lot
easier.
>
>Feeling-like-I-owed-you-an-answer-after-the-april-fool-accusation-ly yrs,
>Jeremy Bowers
>:-)
Thanks, it helped. :)
--
http://mail.python.org/mailman/listinfo/python-list
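(The "stack" in the exchange above is really just closures plus ordinary argument passing: each nested function keeps a reference to the names of the frame that defined it. A wrapper taking *args/**kwargs sees exactly the arguments of each call, which is what the original question was after. A sketch with hypothetical names; `functools.wraps`, used to preserve the wrapped function's name, arrived in Python 2.5:)

```python
import functools

def check_args(func):
    # The wrapper receives every call's arguments, so they can
    # be inspected or validated before the real function runs.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        assert all(isinstance(a, str) for a in args), "expected strings"
        return func(*args, **kwargs)
    return wrapper

@check_args
def greet(name, punct='!'):
    return 'hello ' + name + punct

print(greet('world'))  # -> hello world!
```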
Re: Lambda: the Ultimate Design Flaw
On Sat, 02 Apr 2005 00:40:15 -0500, Steve Holden <[EMAIL PROTECTED]> wrote:

>The danger in GOTO is that it allows the undisciplined programmer to
>develop a badly-structured solution to a programming problem. A
>disciplined programmer will write well-structured code with whatever
>tools come to hand.
>
>regards
> Steve

And how that becomes really clear when you want to modify a large
program that uses GOTOs liberally.

Ron
--
http://mail.python.org/mailman/listinfo/python-list
Docorator Disected
I was having some difficulty figuring out just what was going on with
decorators. So after a considerable amount of experimenting I was
able to take one apart in a way. It required me to take a closer look
at function def's and call's, which is something I tend to take for
granted.
I'm not sure this is 100%, or if there are other ways to view it, but
it seems to make sense when viewed this way.
Is there a way to do this same thing in a more direct way? Like
taking values off the function stack directly. How much of it get's
optimized out by the compiler?
#
# Follow the numbers starting with zero.
#
# (0) Read defined functions into memory
def decorator(d_arg):                  # (7) Get 'Goodbye' off stack
    def get_function(function):        # (8) Get func object off stack
        def wrapper(f_arg):            # (9) Get 'Hello' off stack
            new_arg = f_arg + '-' + d_arg
            result = function(new_arg) # (10) Put new_arg on stack
                                       # (11) Call func object
            return result              # (14) Return result to wrapper
        return wrapper                 # (15) Return result to get_function
    return get_function                # (16) Return result to caller of func

@decorator('Goodbye')                  # (5) Put 'Goodbye' on stack
                                       # (6) Do decorator
def func(s):                           # (12) Get new_arg off stack
    return s                           # (13) Return s to result

# (1) Done reading definitions
print func('Hello')                    # (2) Put 'Hello' on stack
                                       # (3) Put func object on stack
                                       # (4) Do @decorator
                                       # (17) print 'Hello-Goodbye'
# Hello-Goodbye
--
http://mail.python.org/mailman/listinfo/python-list
Re: Docorator Disected
On 2 Apr 2005 07:22:39 -0800, "El Pitonero" <[EMAIL PROTECTED]>
wrote:
>Is it possible that you mistakenly believe your @decorator() is being
>executed at the line "func('Hello')"?
>
>Please add a print statement to your code:
>
>def decorator(d_arg):
> def get_function(function):
> print 'decorator invoked'
> def wrapper(f_arg):
> new_arg = f_arg+'-'+d_arg
> result = function(new_arg)
> return result
> return wrapper
> return get_function
Thanks, you are correct. I'll post a revised dissection with print
statements documenting the flow in a few minutes. I'm still a bit
fuzzy on how the arguments are stored and passed.
Regards,
Ron_Adam
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorator Dissection
On Sat, 02 Apr 2005 19:59:30 +0200, "Diez B. Roggisch"
<[EMAIL PROTECTED]> wrote:
>> statements documenting the flow in a few minutes. I'm still a bit
>> fuzzy on how the arguments are stored and passed.
>
>The arguments are part of the outer scope of the function returned, and thus
>they ar kept around. That's standart python,too:
>
>def foo():
>    a = 10
>    def bar():
>        return a*a
>    return bar
>
>print foo()()
>
>
>No decorator-specific magic here - just references kept to outer frames
>which form the scope for the inner function.
I followed that part. The part that I'm having problems with is that
the first nested function gets the function object as its argument
without a previous reference to the argument name in the outer frames.
So a function call to it is being made with the function name as the
argument, and since that isn't visible it looks as if it's magic.

Ok, since I was using the wrong model the first time, probably due to
not sleeping well and mixing past language experience in improperly,
we will try again.

In the model below, the @decorator (the object, or the interpreter
executing the @decorator statement?) calls each nested function in the
function of the same name until it reaches the innermost function,
which is then bound to the function name. Is this correct now?
Cheers,
Ron
### Decorator Dissection V.2 ###
print "\n(0) Start reading decorator defs"
def decorator(d_arg):
    print "(3) decorator: gets '" + d_arg + "'"
    def get_function(function):
        print "(6) get_function: gets 'func' object"
        def wrapper(f_arg):
            print "(10) wrapper: gets '" + f_arg + "'"
            new_arg = f_arg + '-' + d_arg
            print "(11) wrapper: calls func('" + new_arg + "')"
            result = function(new_arg)
            print "(13) wrapper: returns '" + result + "'"
            return result
        print "(7) get_function: returns 'wrapper' object"
        return wrapper
    w = get_function
    print "(4) decorator: return 'get_function' object"
    print '(5) @decorator: calls get_function(func)'
    # Need to print this here, done at *(5)
    return w

print "(1) Done reading decorator defs\n"
print "(2) @decorator: calls decorator('Goodbye')"
# *(5) @decorator: call get_function(func)
@decorator('Goodbye')
def func(s):
    print '(12) func returns:', s
    return s

print "(8) @decorator: func = wrapper\n"
print "(9) Call func('Hello') which is now wrapper object:"
result = func('Hello')
print "(14) result gets '" + result + "'\n"
print result
#---output---
(0) Start reading decorator defs
(1) Done reading decorator defs
(2) @decorator: calls decorator('Goodbye')
(3) decorator: gets 'Goodbye'
(4) decorator: return 'get_function' object
(5) @decorator: calls get_function(func)
(6) get_function: gets 'func' object
(7) get_function: returns 'wrapper' object
(8) @decorator: func = wrapper
(9) Call func('Hello') which is now wrapper object:
(10) wrapper: gets 'Hello'
(11) wrapper: calls func('Hello-Goodbye')
(12) func returns: Hello-Goodbye
(13) wrapper: returns 'Hello-Goodbye'
(14) result gets 'Hello-Goodbye'
Hello-Goodbye
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorator Dissection
On 2 Apr 2005 08:39:35 -0800, "Kay Schluehr" <[EMAIL PROTECTED]>
wrote:
>
>There is actually nothing mysterious about decorators.
I've heard this quite a few times now, but it *is* quite mysterious if
you are not already familiar with how they work. Or instead of
mysterious, you could say complex, as they can be used in quite
complex ways.
What is missing most with them is some really good documentation. I
got the basic idea and syntax of decorators down right away, but ran
into problems implementing them because, the structure of the
functions being used for the decorators wasn't clear.
>It is nothing
>more than ordinary function composition, executed when the decorated
>function is defined. In case of Your definition it, the composition
>rules are:
>
>decorator("Goodbye")(func)(s) = get_function(func)(s) = wrapper(s),
>where wrapper stores "Goodbye" in the local d_arg.
It worked as a model, but I mixed in concepts from C stacks and
function calls, which apparently isn't correct. I posted another
model, it should be a bit closer. (With the Subject line spelled
correctly; continue this thread there. ;)
>Or a bit more formally we state the composition principle:
>
>Args x Func -> Func, where decorator() is a function of Args, that
>returns a function Func -> Func. As Guido had shown recently in his
>Artima blog, Func need not be an instance of an ordinary function but
>can be a function-object like his MultiMethod :
>
>http://www.artima.com/weblogs/viewpost.jsp?thread=101605
I read this, this morning it was very interesting.
>It is also possible to extend this view by "chaining" decorators.
>
>decorator : Args(2) x (Args(1) x Func - > Func ) -> Func.
>
>To understand decorator chains it is very helpfull to accept the
>functional view instead of arguing in a procedural picture i.e. pushing
>and popping arguments onto and from the stack.
Understanding chains is next on my list. :)
>Someone asked once for a solution of the following problem that is
>similar in character to Guidos multimethod but some more general.
>
>def mul(m1,m2):
>    def default(m1,m2):
>        return "default",1+m1*m2
>    def mul_dec(m1,m2):
>        return "mul_dec",Decimal(str(m1))*Decimal(str(m2))
>    def mul_float(m1,m2):
>        return "mul_float",m1*m2
>    return (default,mul_dec,mul_float)
>
>The function mul defines the inner functions default, mul_float and
>mul_dec. What we want is a unified access to this functions by means of
>mul. Guidos solution would decompose mul in three different versions of
>mul:
This is similar to C++'s polymorphism, which I played with nearly 10
years ago. I generally found it useful only in small doses even then.
I seem to recall that C++'s version was implemented at compile time,
with each function call being matched up with the correct function by
the argument types, whereas Guido's version is dynamic and handles the
situation at run time. I may not be correct in this, it's been a
while.
>@multimethod(int,float)
>def mul(m1,m2):
>    return m1*m2
>
>@multimethod(float,float)
>def mul(m1,m2):
>    return m1*m2
>
>@multimethod(Decimal,Decimal)
>def mul(m1,m2):
>    return m1*m2
>
>but it is hard to tell, what should be done if no argument tuple
>matches.
It could then invoke the adapt() function to determine if a single
possible way to continue is available. But with that you could run
into some very subtle bugs. Or just annoying Windows-like behavior,
such as a word processor auto-correcting a word when you don't want it
to.
>An attempt like:
>
>@multimethod(object,object)
>def mul(m1,m2):
>    return 1+m1*m2
>
>would be useless, because there is no concrete match of argument types
>onto (object,object).
>
>So I introduced an "external switch" over argument tuples, using a
>decorator chain:
>
>@case(None,"default")
>@case((float,float),'mul_float')
>@case((int,float),'mul_float')
>@case((Decimal,Decimal),'mul_dec')
>
>def mul(m1,m2):
>    def default(m1,m2):
>        return "default", 1+m1*m2
>    def mul_dec(m1,m2):
>        return "mul_dec", Decimal(str(m1))*Decimal(str(m2))
>    def mul_float(m1,m2):
>        return "mul_float", m1*m2
>    return (default, mul_dec, mul_float)
>
>Can you imagine how "case" works internally?
>
>Regards,
>Kay
Sure, that should be fairly straightforward, although I can imagine
several ways of implementing it at the moment. I think after I play
with decorator chains, one way will probably stand out as being
cleaner than the others.
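One plausible way `case` could work internally is a guess at Kay's design, not his actual code: each `case` call records a (types, name) pair on a dispatcher, and the innermost application wraps the factory so calls are routed to the matching inner function by name (`None` acting as the catch-all default).

```python
# Hypothetical reconstruction of the "case" decorator chain;
# Kay's real implementation may differ.
def case(types, name):
    def wrap(func_or_dispatcher):
        table = getattr(func_or_dispatcher, '_cases', None)
        if table is None:
            # Innermost decorator: wrap the factory in a dispatcher.
            factory = func_or_dispatcher
            table = []
            def dispatcher(*args):
                # Call the factory to get the inner functions by name.
                inner = dict((f.__name__, f) for f in factory(*args))
                argtypes = tuple(type(a) for a in args)
                for t, n in dispatcher._cases:
                    if t == argtypes:
                        return inner[n](*args)
                for t, n in dispatcher._cases:
                    if t is None:        # catch-all default
                        return inner[n](*args)
            dispatcher._cases = table
            func_or_dispatcher = dispatcher
        table.append((types, name))
        return func_or_dispatcher
    return wrap

@case(None, "default")
@case((float, float), 'mul_float')
@case((int, float), 'mul_float')
def mul(m1, m2):
    def default(m1, m2):
        return "default", 1 + m1 * m2
    def mul_float(m1, m2):
        return "mul_float", m1 * m2
    return (default, mul_float)

print(mul(2.0, 3.0))  # ('mul_float', 6.0)
print(mul(2, 2))      # ('default', 5)
```

The key trick is that only the innermost `case` sees the raw factory; the outer ones just append to the dispatch table as the chain unwinds.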
Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorator Dissection
On Sat, 02 Apr 2005 21:04:57 +0200, "Diez B. Roggisch"
<[EMAIL PROTECTED]> wrote:
>> I followed that part. The part that I'm having problems with is that the
>> first nested function gets the argument for the function name without
>> a previous reference to the argument name in the outer frames. So a
>> function call to it is being made with the function name as the
>> argument, and that isn't visible, so it looks as if it's magic.
>
>No, its not - but I stepped into that trap before - and thought its magic :)
It's magic until we understand it. ;)
I get the feeling that those who have gotten to know decorators find
them easy, and those who haven't find them nearly impossible to
understand. Which means there are a fairly large first few steps to
get over; then it gets easy. There *is* some underlying processing
going on, which is also part of what makes decorators attractive: less
typing, declaring, and organizing. But that hidden machinery is also
what causes the difficulty in understanding and using them at first.
>The trick is to know that
>
> - a decorator is a callable
> - get passed a callable
> - has to return a callable
>
>So the simplest decorator imaginable is:
>
>def identity(f):
>    return f
>
>And the decorator _syntax_ is just a python expression that has to be
>_evaluated_ to yield a decorator. So
>
>@identity
>def foo(self):
>    pass
This much I understand.
>the @identity is just the expression evaluated - to the function reference
>to identity, which is callable and follows the decorator protocol - and the
>_result_ of that evaluation is called with the callable in question.
This tells me what it is and what it does, but not how it works. How
is the expression ***evaluated***? What is the ***decorator
protocol***?
Those are the parts I'm trying to understand at this point. I know
this is the equivalent of looking behind the curtains to reveal the
little man who is the wizard. But I can't resist. :)
>So if you want to have _parametrized_ decorators, that expression is
>_evaluated_ and has to yield a decorator. Like this:
>
There's that word again... **evaluated**. How?
>def arg_decorator(arg):
>    def real_decorator(f):
>        return f
>    return real_decorator
>
>So, this works
>
>@arg_decorator('fooobar')
>def foo(self):
>    pass
>
>@arg_decorator('fooobar') is evaluated to real_decorator (which has a scope
>containing arg), and _that_ gets called with foo.
>
So if I'm following you right, when the interpreter gets to the line
@arg_decorator('fooobar') it does the following?

foo = arg_decorator('fooobar')(foo)() #?
(experiment with idle a bit...)
Ok I got it. :)
I wasn't aware that the form:
result = function(args)(args)
Was a legal python statement.
So python has a built in mechanism for passing multiple argument sets
to nested defined functions! (click) Which means this is a decorator
without the decorator syntax.
def arg_decorator(arg1):
    def real_decorator(function):
        def wrapper(arg2):
            return function(arg2)
        return wrapper
    return real_decorator

def foo(arg2):
    pass

foo = arg_decorator('fooobar')(foo)(arg2)
The apparent magic is the silent passing of the second two arguments.
So this isn't a decorator question any more. Each argument gets
passed to the next inner defined function, via... a stack(?) ;)
Somehow I think I've completed a circle. LOL
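The "circle" Ron arrives at can be checked directly: the decorator syntax is nothing more than the manual rebinding. A minimal sketch (the names `arg_decorator`, `foo`, and `bar` are illustrative, not from any real library):

```python
def arg_decorator(arg1):
    def real_decorator(function):
        def wrapper(arg2):
            # arg1 comes from the enclosing scope, arg2 from the call site
            return function(arg2) + arg1
        return wrapper
    return real_decorator

@arg_decorator(10)
def foo(x):
    return x * 2

# Equivalent spelling without the @ syntax:
def bar(x):
    return x * 2
bar = arg_decorator(10)(bar)

print(foo(3), bar(3))  # both give 16
```

Both spellings leave the name bound to `wrapper`, whose later call supplies the final argument set; nothing is passed "silently" other than the function object itself.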
Cheers,
Ron
>HTH - bit me the first time too :)
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorator Dissection
On Sat, 02 Apr 2005 18:39:41 GMT, Ron_Adam <[EMAIL PROTECTED]> wrote:
>>def foo():
>>    a = 10
>>    def bar():
>>        return a*a
>>    return bar
>>
>>print foo()()   <--- *Here*
>>
>>No decorator-specific magic here - just references kept to outer frames
>>which form the scope for the inner function.
Thanks Kay, I wasn't aware of Python's ability to pass arguments to
nested functions in this way. I missed it the first time.
Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Docorator Disected
On Sat, 02 Apr 2005 21:28:36 GMT, [EMAIL PROTECTED] (Bengt Richter)
wrote:
>I think it might help you to start out with very plain decorators rather than
>decorators as factory functions that return decorator functions that wrap the
>decorated function in a wrapper function. E.g., (this could obviously be
>parameterized as a single decorator factory, but I wanted to show the simplest
>level of decorator functionality)
Thanks for the examples of stacked decorators! :-)
I think I pretty much got it now. I had never needed to pass arguments
to nested "defined" functions before, and none of the documentation I
have ever mentioned that alternative.
So I didn't know I could do this:

def foo(a1):
    def fee(a2):
        return a1+a2
    return fee

fum = foo(2)(6)   <-- !!!
# fum is 8

The interesting thing about this is that the 'return fee' statement
gets the (6) apparently appended to it, so it becomes 'return fee(6)'.
That subtle action is confusing if you don't already know about it,
which I didn't.
In this example:

def foo(a1):
    def fee(a2):
        return a1+a2
    return fee

fum = foo(2)

There is no second set of arguments to append to 'return fee', so the
name fum is pointed to the object fee instead, and fee is not
evaluated. This second subtle action is also confusing if you aren't
aware of it, since the two look the same when you examine the def
statements. So there is no reason to think they would not act the
same, both returning a function object.
Now add the @decorator syntax to the mix, which hides the extra
argument sets that are passed to the nested defined functions, and the
obscuration is complete. There then is no visual indication of where
the function calls get their arguments from, and this is what I
believe caused me so much trouble with this.
Another inconsistency, although not a bad one, is that nested
'defined' functions share scope, but nested function calls do not.
Now what this means is that it will be very difficult for some people
to put it all together.
I would have gotten it sooner or later, but I'm really happy to have
help from comp.lang.python on this one. :)
>I like annotated code walk-throughs. But as others have pointed out,
>it's still a bit buggy ;-)
It helped a lot, but notice that it took me several tries. That's a
strong indicator that decorators are more implicit than explicit, and
that goes against the "Explicit is better than implicit" guideline
that Python tries to follow.
Maybe there are ways to make decorators -and- nested function calls a
bit more explicit?
I think having indicators on the return statements that are meant to
return a value vs. an object would help readability and take some of
the mystery out as far as the uninitiated are concerned.

def foo(a1):
    def fee(a2):
        def fiddle(a3):
            pass
        return a3
    return fee      # Always return a function object.
                    # Error, if argument is passed to it.
# and
    return fee(a2)  # Always require an argument,
                    # error if none is passed to it.

Or some other way if this breaks something. But it would make it more
apparent what a nested function should do, and give clearer feedback
when trying to use or write decorators.
I'm not sure what might make @decorator more explicit. Maybe allowing
the function to be specified as an option. Maybe it is already(?)

@decorator(a1)(foo)
def foo():
    pass

So we would have:

def foo(a1):
    def fee(a2):
        def fiddle(a3):
            pass
        return a3
    return fee      # Object always returned here,
                    # or error if argument is received.

@decorator(a1)(fum)   # Last argument optional.
def fum(a3):
    return a3

These I think are small changes that might be acceptable. A little
more aggressive alteration: requiring the 'function' argument may have
a use when using stacked decorators. Then it could be inserted into a
sequence?

@deco3(postcalc)
@deco2(fum)
@deco1(precalc)
def fum(pointxyz):
    return translatepoint(pointxyz)

... and that reversed order... (yuck!), is it really necessary?
Readability is important, and it is a big reason people don't jump
ship for some other language. Why the exceptions here?
Ok, I don't mean to gripe. :-) I'm sure there's been plenty of that in
past discussions.
Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorater inside a function? Is there a way?
On 2 Apr 2005 10:23:53 -0800, "George Sakkis" <[EMAIL PROTECTED]>
wrote:
>It turns out it's not a "how to inflate tires with a hammer" request;
>I've actually written an optional type checking module using
>decorators. The implementation details are not easy to grok, but the
>usage is straightforward:
>
>from typecheck import *
>
>@returns(listOf(int, size=3))
>@expects(x=str, y=containerOf(int))
>def foo(x,y):
>    return [len(x)] + list(y)
>
Hi George,
I wrote one like that too yesterday, once I figured out how to pass
the arguments, except without 'containerOf' implemented. That's a nice
touch. Mine looks like:

@define((int, int, float), (int, list))
def somefunction(a_int, b_int, c_float):
    # some stuff
    return an_int, an_list

It checks both the inputs and returns for types and number of items,
and gives appropriate errors for both.
Next would be to surround the 'def define' statements with an
"if __debug__:" statement so it can be turned off for the final
version. I wonder if a decorator that just passes values straight
through gets optimized out or not? Or if there's another way to turn
off a decorator?
I also have a test_type function that can be put inline that tries to
adapt the value before it gives an error.
These are interesting problems to solve and go into my tool box,
although I don't have a need for them at the moment. :)
Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
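A minimal sketch of the kind of `define` decorator described above. This is my own reconstruction from the usage shown, not Ron's or George's actual module; the error messages and the strict `isinstance` checks are assumptions.

```python
# Hypothetical sketch of an argument/return type-checking decorator,
# loosely following the @define((in_types), (out_types)) usage above.
def define(in_types, out_types):
    def decorate(func):
        def checked(*args):
            # Check argument count and each argument's type.
            if len(args) != len(in_types):
                raise TypeError("expected %d arguments, got %d"
                                % (len(in_types), len(args)))
            for a, t in zip(args, in_types):
                if not isinstance(a, t):
                    raise TypeError("argument %r is not %s" % (a, t.__name__))
            result = func(*args)
            # Check each returned item's type.
            for r, t in zip(result, out_types):
                if not isinstance(r, t):
                    raise TypeError("return value %r is not %s"
                                    % (r, t.__name__))
            return result
        return checked
    return decorate

@define((int, int, float), (int, list))
def somefunction(a, b, c):
    return a + b, [c]

print(somefunction(1, 2, 3.0))  # (3, [3.0])
```

Wrapping the body of `define` in an `if ENABLE_TYPECHECKING:` guard, as George suggests later in the thread, would let the whole check collapse to an identity decorator in production.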
Re: Docorator Disected
On 2 Apr 2005 20:02:47 -0800, "El Pitonero" <[EMAIL PROTECTED]>
wrote:
>Ron_Adam wrote:
>>
>> So I didn't know I could do this:
>>
>> def foo(a1):
>> def fee(a2):
>> return a1+a2
>> return fee
>>
>> fum = foo(2)(6) <-- !!!
>
>Ah, so you did not know functions are objects just like numbers,
>strings or dictionaries. I think you may have been influenced by other
>languages where there is a concept of static declaration of functions.
No, I did not know that you could pass multiple sets of arguments to
nested defined functions in that manner. I just haven't run across it
in the two years I've been playing around with Python, and I haven't
had a reason to try it either. But now that I'm aware of it, I'll
probably find more uses for it.
>The last line can be better visualized as:
>
>fum = (foo(2)) (6)
>
>where foo(2) is a callable.
>
>---
>
>Since a function is an object, they can be assigned (rebound) to other
>names, pass as parameters to other functions, returned as a value
>inside another function, etc. E.g.:
>
>def g(x):
>    return x+3
>
>h = g # <-- have you done this before? assignment of function
Sure, I have no problem with that. Been doing it for quite a while. :)
>print h(1) # prints 4
>
>def f(p):
>    return p # <-- function as return value
>
>p = f(h) # <-- passing a function object
>
>print p(5) # prints 8
>
>Python's use of "def" keyword instead of the "=" assignment operator
>makes it less clear that functions are indeed objects. As I said
>before, this is something to think about for Python 3K (the future
>version of Python.)
I've always thought of 'def' as if it were 'make', or in Python just
a variation of 'class' for a subset of objects of type 'function'.
>
>
>Function modifiers exist in other languages. Java particularly is
>loaded with them.
>
>public static synchronized double random() {
>...
>}
>
>So your new syntax:
>
>@decorator(a1)(foo)
>def foo():
> pass
>
>is a bit out of the line with other languages.
So? Why would it need to be the same as other languages? I like
Python because it's not the same. :)
The above syntax suggestion just matches the already existing
behavior.
Thanks for helping, BTW. I think I have it down pretty well now.
Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorater inside a function? Is there a way?
On 3 Apr 2005 00:20:32 -0800, "George Sakkis" <[EMAIL PROTECTED]>
wrote:
>Yes, it is possible to turn off type checking at runtime; just add this
>in the beginning of your define:
>
>def define(func):
>    if not ENABLE_TYPECHECKING:
>        return lambda func: func
>    # else decorate func
>
>where ENABLE_TYPECHECKING is a module level variable that can be
>exposed to the module's clients. In my module, the default is
>ENABLE_TYPECHECKING = __debug__.
>
>George
Cool, I'll try that.
Thanks, Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Docorator Disected
On Sun, 03 Apr 2005 08:37:02 +0200, "Martin v. Löwis"
<[EMAIL PROTECTED]> wrote:
>Ron_Adam wrote:
>>>Ah, so you did not know functions are objects just like numbers,
>>>strings or dictionaries. I think you may have been influenced by other
>>>languages where there is a concept of static declaration of functions.
>>
>> No, I did not know that you could pass multiple sets of arguments to
>> nested defined functions in that manner.
>
>Please read the statements carefully, and try to understand the mental
>model behind them. He did not say that you can pass around multiple
>sets of arguments. He said that functions (not function calls, but
>the functions themselves) are objects just like numbers. There is
>a way of "truly" understanding this notion, and I would encourage
>you to try doing so.
Hello Martin,
It is interesting how what we already know, combined with a new
situation presented in an indirect way, can lead us to view an
isolated situation in a biased way. That's pretty much the situation
I've experienced with this one point. I already knew that functions
are objects, and that objects can be passed around. My mind just
wasn't clicking on this particular set of conditions for some reason,
probably because I was looking too closely at the problem. (Starting
off as a tech, with knowledge of how microchips work, can sometimes be
an obstacle when programming in high-level languages.)
I'm sure I'm not the only one who's had difficulties with this. But
I'm somewhat disappointed in myself for not grasping the concept as it
is, in this particular context, a bit sooner.
Cheers,
Ron
>Regards,
>Martin
--
http://mail.python.org/mailman/listinfo/python-list
Re: Docorator Disected
On Sun, 03 Apr 2005 07:53:07 GMT, [EMAIL PROTECTED] (Bengt Richter) wrote:
>>No, I did not know that you could pass multiple sets of arguments to
>That phraseology doesn't sound to me like your concept space is quite
>isomorphic
>with reality yet, sorry ;-)
You'll be happy to know, my conceptual conceptions are conclusively
isomorphic this morning. :-)
>It sounds like you are thinking of "multiple sets of arguments"
>as an aggregate that is passed as such, and that isn't happening, as I believe
>El Pitonero
>is trying to indicate with his parenthesized visualization below.
Well there are multiple sets of arguments, and there are multiple
functions involved. It's just a matter of how they get matched up.
Depending on what level you look at it, it could be both ways. But the
correct way to view it is in the context of the language it self, and
not the underlying byte code, c++ or assembly code.
>What is happening is that an expression "foo(2)(6)" is being evaluated left to
>right.
>First foo as a name evaluates to whatever it is bound to, which is the foo
>function.
>Then () is the calling operator, which says evaluate the list inside the
>parens left to right
>and call the thing you had so far, which was foo here. The arg list was just
>2, so foo is called
>with 2, and foo returns something, with which we will do the next operation if
>there is one.
Like this, of course:

def foo(x):
    def fee(y):
        return y*x
    return fee

statement:  z = foo(2)(6)
becomes:    z = fee(6)
becomes:    z = 12
The position of the 'def fee' inside of 'def foo' isn't relevant; it's
only needed there so it can have access to foo's name space. It could
be at the top or bottom of the function it's in, and it wouldn't make
a difference.
This would be the same without the nesting:
def foo(xx):
global x
x = xx
return fee
def fee(y):
global x
return y*x
z = foo(2)(6)
>So if you are seeing (2)(6) as something to pass, as opposed to a sequence of
>operations, I think there's
>a misconception involved. Perhaps I am taking your words askew ;-)
It's not entirely a misconception. Lets see where this goes...
> >>> dis.dis(compiler.compile('foo(2)(6)','','eval'))
>  1           0 LOAD_NAME                0 (foo)
>              3 LOAD_CONST               1 (2)
>              6 CALL_FUNCTION            1
>              9 LOAD_CONST               2 (6)
>             12 CALL_FUNCTION            1
>             15 RETURN_VALUE
In this example, you have byte code that was compiled from source
code, and then an interpreter running the byte code, which is itself a
program written in another language, C; which gets translated into yet
another language, assembly; which at one time would have corresponded
to specific hardwired registers and circuits (I could go further...
ie... translators... PNP... holes...), but with modern processors it
may yet get translated still further.
While all of this isn't relevant, it's knowledge in my mind, and
affects my view of programming sometimes.
Now take a look at the following descriptions of the above byte codes
from http://docs.python.org/lib/bytecodes.html
LOAD_NAME namei
    Pushes the value associated with "co_names[namei]" onto the stack.

LOAD_CONST consti
    Pushes "co_consts[consti]" onto the stack.

CALL_FUNCTION argc
    Calls a function. The low byte of argc indicates the number of
    positional parameters, the high byte the number of keyword
    parameters. On the stack, the opcode finds the keyword parameters
    first. For each keyword argument, the value is on top of the key.
    Below the keyword parameters, the positional parameters are on the
    stack, with the right-most parameter on top. Below the parameters,
    the function object to call is on the stack.

RETURN_VALUE
    Returns with TOS to the caller of the function.

*TOS = Top Of Stack.
The calling routine puts (passes) the second set of arguments onto the
stack before calling the function returned on the stack by the
previous call.
Which is exactly how I viewed it when I referred to coming full circle
and the second set of arguments being passed with a "stack(?)".
Or it could equally be said that the functions (objects) are passed
with the stack. So both views are correct, depending on the viewpoint
that is chosen.
Cheers,
Ron
>HTH
>
>Regards,
>Bengt Richter
--
http://mail.python.org/mailman/listinfo/python-list
Re: Docorator Disected
On 3 Apr 2005 00:11:22 -0800, "El Pitonero" <[EMAIL PROTECTED]> wrote:
>Martin v. Löwis wrote:
>Perhaps this will make you think a bit more:
Now my problem is convincing the group I do know it. LOL
>Another example:
>
>def f():
>    return f
>
>g = f()()()()()()()()()()()
>
>is perfectly valid.
Good example! Yes, I realize it. As I said before, I just hadn't come
across this particular variation before using decorators, so it wasn't
clear to me at first. It is now. :)
Read my reply to Bengt Richter.
Thanks, this has been a very interesting discussion.
Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorator Dissection
On Sun, 03 Apr 2005 08:32:09 +0200, "Martin v. Löwis"
<[EMAIL PROTECTED]> wrote:
>Ron_Adam wrote:
>> I wasn't aware that the form:
>>
>> result = function(args)(args)
>>
>> was a legal python statement.
>>
>> So python has a built in mechanism for passing multiple argument sets
>> to nested defined functions! (click) Which means this is a decorator
>> without the decorator syntax.
>
>No. There is no mechanism for passing multiple argument sets to
>nested functions. Instead, functions are objects, which can be
>assigned to variables, passed as arguments to other functions,
>and returned:
Yes there is: it's the stack Python uses to interpret the byte code.
But it's the same mechanism that is used for passing arguments to
sequential function calls also; the only difference is that in the
nested case the next function (object) is returned on the stack. The
next argument is then put onto the stack (passed) before the next
function is called.
How you view this depends on the frame of reference you use. I was
using a different frame of reference, which I wasn't sure was correct
at the time, but it turns out to also be valid. So both viewpoints are
valid.
In any case, I now have a complete picture of how it works, inside and
out, which was my goal. :)
>> So this isn't a decorator question any more. Each argument gets
>> passed to the next inner defined function, via... a stack(?) ;)
>
>No, functions are objects. Notice that in step 1, the object returned
>doesn't have to be a function - other things are callable, too, like
>types, classes, and objects implementing __call__.
They are objects: data structures containing program code and data,
which reside in memory and get executed by, in this case, a byte code
interpreter. The interpreter executes the byte code sequentially,
using a *stack* to call functions (objects) along with their
arguments.
For the record, I never had any trouble understanding the concept of
objects.
I think I first started programming OOP in the mid '90s with C++. It
was the sequence of events in the objects of the nested def functions
that I was trying to understand, along with where the objects get
their arguments, which isn't obvious because of the levels of indirect
calling.
Thanks for the help Martin, it's always appreciated. :)
Cheers,
Ron
>Regards,
>Martin
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorater inside a function? Is there a way?
On 3 Apr 2005 11:17:35 -0700, "George Sakkis" <[EMAIL PROTECTED]>
wrote:
>>def define(func):
>>    if not ENABLE_TYPECHECKING:
>>        return lambda func: func
>>    # else decorate func
>
>A small correction: The argument of the decorator is not 'func' but the
>parameter checks you want to enforce. A template for define would be:
>
>def define(inputTypes, outputType):
>    if not ENABLE_TYPECHECKING:
>        return lambda func: func
>    def decorate(func):
>        def typecheckedFunc(*args,**kwds):
>            # TYPECHECK *args, **kwds HERE #
>            r = func(*args,**kwds)
>            # TYPECHECK r HERE #
>            return r
>        return typecheckedFunc
>    return decorate
This is the same pattern I used except without the enable/disable at
the top.
The inline type check function also checks for TYPECHECK == True and
TYPESTRICT == False as defaults to determine the strictness of the
type checking wanted. TYPESTRICT == True causes it to give an error if
the arguments are not the correct type, even if they are the exact
value. TYPESTRICT == False results in it trying to convert the object,
then checking it by converting it back to the original type; if it's
still equal, it returns the converted object in the specified type.
>Depending on how much flexibility you allow in inputTypes, filling in
>the typechecking logic can be from easy to challenging. For example,
>does typechecking have to be applied in all arguments or you allow
>non-typechecked aruments ? Can it handle *varargs and **kwdargs in the
>original function ? An orthogonal extension is to support 'templated
>types' (ala C++), so that you can check if something is 'a dict with
>string keys and lists of integers for values'. I would post my module
>here or the cookbook but at 560 (commented) lines it's a bit long to
>qualify for a recipe :-)
>
>George
Sounds like your version does quite a bit more than my little test
functions. :)
I question how far type checking should go before you are better off
with a confirmtypes() function that can do a deep type check. And how
much flexibility should that have?
My viewpoint is that type checking should be available for the
singleton types, with conversions only if data integrity can be
ensured, i.e. the conversion is reversible with an "identical" result
returned.
def type_convert(a, t):
    b = t(a)
    aa = type(a)(b)
    if a == aa:
        return b
    else:
        raise TypeError
In cases where a conversion is wanted, but type checking gives an
error, an explicit conversion function or method should be used.
In containers, and more complex objects, deep type checking should be
available through a general function which can compare an object to a
template of types, specific to that object. It's important to use a
template instead of a sample, because a sample could have been
changed.
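A deep check against a type template, as described above, might look like the following. This is my own sketch of the idea; the name `confirmtypes` is borrowed from the prose, and handling only types and sequences (not dicts) is a simplifying assumption.

```python
# Hypothetical deep type check against a template of types; a sketch
# of the idea described above, not an existing function.
def confirmtypes(obj, template):
    # A bare type in the template matches by isinstance.
    if isinstance(template, type):
        return isinstance(obj, template)
    # A container template must match the container's type and length,
    # then each element recursively.
    if type(obj) is not type(template) or len(obj) != len(template):
        return False
    return all(confirmtypes(o, t) for o, t in zip(obj, template))

print(confirmtypes((1, [2.0, 3.0]), (int, [float, float])))  # True
print(confirmtypes((1, [2.0, 'x']), (int, [float, float])))  # False
```

Using a template of types rather than a sample object sidesteps exactly the problem noted above: a sample could have been changed, while a template only ever describes structure.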
It's all about protecting the data content with a high degree of
confidence. In general, 98% of the time the current python way would
be adequate, but those remaining 2% are important enough to warrant
the additional effort that type checking takes.
On another note, there's the possibility that type checking in python
source code could make writing a compiler easier.
Another idea is that of assigning a name a type preference. And then
overload the assign operators to check for that first before changing
a name to point to a new object. It could probably be done with a
second name dictionary in name space with {name:type} pairs. With that
approach you only need to give key variables a type, then they keep
that type preference until it's assigned a new type, or removed from
the list. The down side to this is that it could slow things down.
Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Help me dig my way out of nested scoping
On 3 Apr 2005 14:12:48 -0700, "Brendan" <[EMAIL PROTECTED]> wrote:
>Hi everyone
>
>I'm new to Python, so forgive me if the solution to my question should
>have been obvious. I have a function, call it F(x), which asks for two
>other functions as arguments, say A(x) and B(x). A and B are most
>efficiently evaluated at once, since they share much of the same math,
>ie, A, B = AB(x), but F wants to call them independently (it's part of
>a third party library, so I can't change this behaviour easily). My
>solution is to define a wrapper function FW(x), with two nested
>functions, AW(x) and BW(x), which only call AB(x) if x has changed.
You have several easy choices that would not require modifying your
program much.

1. Use the 'global' keyword to declare lastX, aLastX, and bLastX as
globals; then all functions will have access to them.

def FW(x):
    global lastX, aLastX, bLastX

2. Use function attributes, which are just names attached to the
function using a '.'.

def FW(x):
    #
    # Function body here
    #
    return F(AW, BW)

FW.lastX = None
FW.aLastX = None
FW.bLastX = None

result = FW(x)

You will need to always include the FW. in front of those names.

3. Something else that may help: you can return more than one value at
a time. Python has this neat feature where you can have multiple items
on either side of the '=' sign.

a,b,c = 1,2,3

same as:

a=1
b=2
c=3

And it also works with return statements, so you can return multiple
values.

def abc(n):
    return n+1, n+2, n+3

a,b,c = abc(0)

5. Choice 5 is to rewrite your function as a class. Names in classes
retain their values between calls, and you can access those values the
same way as accessing function attributes.

Hope this helped.
Cheers,
Ron
>To make this all clear, here is my (failed) attempt:
>
>#--begin code -
>
>from ThirdPartyLibrary import F
>from MyOtherModule import AB
>
>def FW(x):
>    lastX = None
>    aLastX = None
>    bLastX = None
>
>    def AW(x):
>        if x != lastX:
>            lastX = x
>            # ^ Here's the problem. This doesn't actually change FW's
>            # lastX, but creates a new, local lastX
>            aLastX, bLastX = AB(x)
>        return aLastX
>
>    def BW(x):
>        if x != lastX:
>            lastX = x
>            # ^ Same problem
>            aLastX, bLastX = AB(x)
>        return bLastX
>
>    # finally, call the third party function and return its result
>    return F(AW, BW)
>
># end code -
>
>OK, here's my problem: How do I best store and change lastX, A(lastX)
>and B(lastX) in FW's scope? This seems like it should be easy, but I'm
>stuck. Any help would be appreciated!
>
> -Brendan
--
http://mail.python.org/mailman/listinfo/python-list
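The rebinding problem in this thread (an inner function cannot rebind an enclosing function's local by plain assignment) is commonly worked around by mutating a shared container instead of rebinding. A minimal sketch of the caching wrapper, with a stand-in `AB()` since the real one lives in the poster's module:

```python
calls = []

def AB(x):
    # Stand-in for the expensive shared computation; the real AB
    # comes from the poster's MyOtherModule.
    calls.append(x)
    return x + 1, x + 2

def FW(x):
    # A mutable dict shared by the closures; mutating it works where
    # rebinding a plain local would not.
    cache = {'lastX': None, 'a': None, 'b': None}

    def refresh(x):
        if x != cache['lastX']:
            cache['lastX'] = x
            cache['a'], cache['b'] = AB(x)   # mutate, don't rebind

    def AW(x):
        refresh(x)
        return cache['a']

    def BW(x):
        refresh(x)
        return cache['b']

    # The real code would hand AW and BW to the third-party F();
    # calling them directly keeps the sketch self-contained.
    return AW(x), BW(x)

print(FW(5), calls)  # (6, 7) [5] -- AB ran only once for both wrappers
```

In later Python versions the `nonlocal` keyword handles this rebinding directly, but the mutable-container trick is what was available at the time of the thread.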
Re: Docorator Disected
On Sun, 03 Apr 2005 23:59:51 +0200, "Martin v. Löwis"
<[EMAIL PROTECTED]> wrote:
>Ron_Adam wrote:
>> This would be the same without the nesting:
>>
>> def foo(xx):
>>     global x
>>     x = xx
>>     return fee
>>
>> def fee(y):
>>     global x
>>     return y*x
>>
>> z = foo(2)(6)
>
>Actually, it wouldn't.
Ok, yes, besides the globals, but I figured that part is obvious so I
didn't feel I needed to mention it. The function call works the same
even though they are not nested functions.
>>
>> It's not entirely a misconception. Lets see where this goes...
>>
>>
>>>>>> dis.dis(compiler.compile('foo(2)(6)','','eval'))
>>>
>>>  1           0 LOAD_NAME                0 (foo)
>>>              3 LOAD_CONST               1 (2)
>>>              6 CALL_FUNCTION            1
>>>              9 LOAD_CONST               2 (6)
>>>             12 CALL_FUNCTION            1
>>>             15 RETURN_VALUE
>
>Hmm. If you think that this proves that (2)(6) is being *passed*, you
>still might have a misconception. What this really does is:
I didn't say they were passed at the same time by the stack. It just
shows my reference to *stacks* was correct, and that there is an
underlying mechanism for calling functions and passing arguments that
uses the stack. I was, however, not yet aware (yesterday afternoon) of
just how the stack worked in this case. This was very much a
figure-it-out-as-you-go exercise.
Yesterday, I had made the incorrect judgement that since the functions
are all nested inside a defined function, I should treat them as a
group instead of as individual functions. But that wasn't the correct
way of viewing it. They are a group in that they share name space, so
I figured (incorrectly) that they shared an argument list somehow, and
those were passed to the group. The silent passing of the function and
its arguments was a big reason for me jumping to this conclusion.
So my reference to:
>>The interesting thing about this is the 'return fee' statement gets
>>the (6) apparently appended to it. So it becomes 'return fee(6).
Which is not correct, as the order of events is wrong and they do not
share a common argument list.
The correct order is:
return fee
fee(6)
with the fee(6) being evaluated after the return statement is
executed.
Another contributing factor is two days of really poor sleep, which is
probably a bigger factor than I would like to admit. I really feel I
should have gotten it much sooner. But I did get it, a little bit at a
time, and had a lot of terrific help along the way. :-)
>> Or it could be said equally the functions (objects) are passed with
>> the stack. So both view are correct depending on the view point that
>> is chosen.
>
>Maybe I don't understand your view, when you said
>
># No, I did not know that you could pass multiple sets of arguments to
># nested defined functions in that manner.
My views changed as I added the missing pieces to the puzzle
yesterday.
At first I didn't see how they were passed at all, in a group or
otherwise. There wasn't any one-to-one way to match the arguments up
visually like there is in a normal function call.
My next thought was that they were passed as a group, to the group of
defined functions that shared the same name space. (Everyone seems to
think I'm stuck on this one.)
My next view, yesterday afternoon, was that they were passed on a
stack somehow, one at a time. This last one is not necessarily
incorrect from a byte code viewpoint, but it's not the best way to
view the problem.
Today I believe I have the correct view, as I said this morning. I
could be wrong yet again. I hope not, though, or I might have to give
up programming. :/
It's interesting that I have had several others tell me they had
trouble with this too.
So it is my opinion that decorators are a little too implicit. I
think there should be a way to make them easier to use while
achieving the same objective.
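For reference, the implicit step is small: the @deco line just rebinds the function name after the def runs. A modern-Python sketch of the equivalence (the deco/fee/fum names are illustrative):

```python
def deco(func):
    def wrapper(*args):
        return ('wrapped', func(*args))
    return wrapper

@deco
def fee(x):
    return x + 1

# The decorated def above is equivalent to writing:
def fum(x):
    return x + 1
fum = deco(fum)     # explicit rebinding; @deco just hides this step

# fee(1) and fum(1) both give ('wrapped', 2)
```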
Thanks again for the reply, :)
Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Help me dig my way out of nested scoping
On 3 Apr 2005 16:21:10 -0700, "Brendan" <[EMAIL PROTECTED]> wrote:

>Thanks for the tips. Making FW a callable class (choice 5) seems to be
>a good (if verbose) solution. I might just wrap my temporary values in
>a list [lastX, lastA, lastB] and mutate them as Michael suggests.
>Thanks to Michael especially for the explanation of the name-binding
>process that's at the heart of the issue.
>
>The other choices are not as helpful to me for the following reasons:
>
>choice 1: I don't want the temporary values of lastA and lastB to be
>global variables in my case as they are great big numeric arrays, and
>I'd like their memory to be reclaimed after FW is done.

Generally, global variables should be avoided in Python if you are
doing a large application. For smaller ones they are ok, but they are
just a little slower than local variables.

You could use a classic class, which is a good way to store a single
group of data. The 'del' statement will unbind a name from an object
so the objects can be garbage collected.

class data:
    A = []
    B = []

def countupdown():
    for n in xrange(11):
        data.A.append(n)
        data.B.append(10-n)
    print data.A
    print data.B

countupdown()

# store data
# Check out the pickle module for this.

del data

>choice 2: I tried this without success. Using Micheal's example, I
>would assume you mean something like this:

def outer():
    def inner():
        outer.b += 1
        print outer.b
    inner()

outer.b = 1    # <-- initialize here after function of same name
outer()

# save data method here

del outer      # delete outer and its attributes

--
http://mail.python.org/mailman/listinfo/python-list
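The mutate-a-list suggestion in this thread works because mutating a list's contents never rebinds the enclosing name, so no global is needed. Modern Python also offers `nonlocal` for rebinding the outer name directly (not available in the Python of this thread). A sketch with illustrative names:

```python
def make_counter():
    state = [0]          # mutable container: mutation needs no rebinding
    def bump():
        state[0] += 1    # mutates the list; 'state' is never rebound
        return state[0]
    return bump

def make_counter_nonlocal():
    count = 0
    def bump():
        nonlocal count   # modern Python can rebind the enclosing name
        count += 1
        return count
    return bump

counter = make_counter()
first, second = counter(), counter()   # 1, 2
```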
Decorator Base Class: Needs improvement.
Hi, Thanks again for all the help in understanding the details of
decorators.
I put together a class to create decorators that could make them a lot
easier to use.
It still has a few glitches in it that needs to be addressed.
(1) The test for the 'function' object needs to not test for a string
but an object type instead.
(2) If the same decorator instance is stacked, it will get locked in a
loop. But stacking different instances created from the same
decorator object works fine.
(3) It has trouble if a decorator has more than one argument.
But I think all of these things can be fixed. Would this be something
that could go in the builtins library? (After any issues are fixed
first of course.)
When these are stacked, they run all the preprocesses first, call the
function, then run all the postprocesses. It just worked out
that way, which was a nice surprise and makes this work a bit
differently than the standard decorators.
Cheers,
Ron
#---start---
class Decorator(object):
    """
    Decorator - A class to make decorators with.

        self.function - name of function decorated.
        self.arglist - arguments of decorator
        self.preprocess - over ride to preprocess function arguments.
        self.postprocess - over ride to postprocess function
            return value.

    Example use:

        class mydecorator(Decorator):
            def preprocess(self, args):
                # process args
                return args
            def postprocess(self, results):
                # process results
                return results

        deco = mydecorator()

        @deco
        def function(args):
            # function body
            return args
    """
    function = None
    arglist = []

    def __call__(self, arg):
        self.arglist.append(arg)
        def _wrapper(args):
            pre_args = self.preprocess(args)
            result = self.function(pre_args)
            return self.postprocess(result)
        if 'function' in str(arg):
            self.arglist = self.arglist[:-1]
            self.function = arg
            return _wrapper
        return self

    def preprocess(self, args):
        return args

    def postprocess(self, result):
        return result


class mydecorator(Decorator):
    def preprocess(self, args):
        args = 2*args
        return args
    def postprocess(self, args):
        args = args.upper()
        args = args + str(self.arglist[0])
        return args

deco = mydecorator()

@deco('xyz')
def foo(text):
    return text

print foo('abc')
#---end---
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorator Base Class: Needs improvement.
Ok, that post may have a few (dozen?) problems in it. I got glitched
by IDLE not clearing variables between runs, so it worked for me
because it was getting values from a previous run.
This should work better, fixed a few things, too.
The decorators can now take more than one argument.
The function and arguments lists initialize correctly now.
It doesn't work with functions with more than one variable. It seems
tuples don't unpack when given to a function as an argument. Any way
to force it?
class Decorator(object):
    """
    Decorator - A base class to make decorators with.

        self.function - name of function decorated.
        self.arglist - arguments of decorator
        self.preprocess - over ride to preprocess function arguments.
        self.postprocess - over ride to postprocess function
            return value.

    Example use:

        class mydecorator(Decorator):
            def preprocess(self, args):
                # process args
                return args
            def postprocess(self, results):
                # process results
                return results

        deco = mydecorator()

        @deco
        def function(args):
            # function body
            return args
    """
    def __init__(self):
        self.function = None
        self.arglist = []

    def __call__(self, *arg):
        if len(arg) == 1:
            arg = arg[0]
        self.arglist.append(arg)
        def _wrapper(*args):
            if len(args) == 1:
                args = args[0]
            pre_args = self.preprocess(args)
            result = self.function(pre_args)
            return self.postprocess(result)
        if 'function' in str(arg):
            self.arglist = self.arglist[:-1]
            self.function = arg
            return _wrapper
        return self

    def preprocess(self, args):
        return args

    def postprocess(self, result):
        return result

#---1---
class mydecorator(Decorator):
    def preprocess(self, args):
        args = 2*args
        return args
    def postprocess(self, args):
        args = args.upper()
        args = args + str(self.arglist[0])
        return args

deco = mydecorator()

@deco('xyz')
def foo(text):
    return text

print foo('abc')

#---2---
class decorator2(Decorator):
    def preprocess(self, args):
        return args + sum(self.arglist[0])
    def postprocess(self, args):
        return args

deco2 = decorator2()

@deco2(1,2)
def foo(a):
    return a

print foo(1)

# This one doesn't work yet.
#---3---
class decorator3(Decorator):
    pass

deco3 = decorator3()

@deco3
def foo(a,b):
    return a,b

print foo(1,3)
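On the tuple question above: a tuple passed as one argument stays packed as one argument; the * prefix at the call site is what spreads it into separate positional arguments. A minimal modern-Python sketch (the `add` name is illustrative):

```python
def add(a, b, c):
    return a + b + c

args = (1, 2, 3)

# add(args) raises TypeError: the tuple arrives as ONE argument,
# leaving b and c unfilled.
try:
    add(args)
except TypeError:
    failed = True

total = add(*args)   # the * prefix unpacks the tuple into a, b, c
# total == 6
```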
--
http://mail.python.org/mailman/listinfo/python-list
Decorator Maker is working. What do you think?
Ok... it works! :)
So what do you think?
Look at the last stacked example: it processes the preprocesses first
in forward order, then does the postprocesses in reverse order, which
might be useful. Interesting in any case.
Making decorators with this class is a snap!
Any thoughts? Any improvements? Any bugs?
Cheers,
Ron_Adam
#---start---
class Decorator(object):
    """
    Decorator - A base class to make decorators with.

        self.function - the function decorated.
        self.arglist - list of arguments of decorator
        self.preprocess - over ride to preprocess function arguments.
        self.postprocess - over ride to postprocess function
            return value.

    Example use:

        class mydecorator(Decorator):
            def preprocess(self, args):
                # process args
                return args
            def postprocess(self, results):
                # process results
                return results

        deco = mydecorator()

        @deco
        def function(args):
            # function body
            return args
    """
    def __init__(self):
        self.function = None
        self.arglist = []

    def __call__(self, *arg):
        if len(arg) == 1:
            arg = arg[0]
        self.arglist.append(arg)
        def _wrapper(*args):
            pre_args = self.preprocess(*args)
            if type(pre_args) is not tuple:
                pre_args = (pre_args,)
            result = self.function(*pre_args)
            return self.postprocess(result)
        if 'function' in str(arg):
            self.arglist = self.arglist[:-1]
            self.function = arg
            return _wrapper
        return self

    def preprocess(self, *args):
        return args

    def postprocess(self, result):
        return result

#---1---
class decorator1(Decorator):
    def preprocess(self, text):
        # interleave the decorator argument's characters with the text
        darg = self.arglist[0]
        newarg = ''
        n = 0
        for c in text:
            newarg = newarg + c + darg[n]
            n += 1
            if n >= len(darg):
                n = 0
        return newarg
deco1 = decorator1()

@deco1('_-^-')
def test1(text):
    return text

print test1('abcdefg')
# a_b-c^d-e_f-g^

#---2---
class decorator2(Decorator):
    def postprocess(self, result):
        result = result*1.0*self.arglist[0][0]/self.arglist[0][1]
        return result

deco2 = decorator2()

@deco2(2,3)
def test2(a):
    return a

print test2(7)
# 4.667

#---3---
class decorator3(Decorator):
    def preprocess(self, arg1, arg2):
        arg1 *= 2
        arg2 *= 2
        return arg1, arg2

deco3 = decorator3()

@deco3
def test3(a,b):
    return a,b

print test3(1,3)
# (2, 6)

#---4---
class decorator4(Decorator):
    def postprocess(self, result):
        result = int(result/self.arglist[0])
        return result

deco4 = decorator4()

@deco4(2)
def test4(n1,n2,n3,n4,n5):
    return n1+n2+n3+n4+n5

print test4(1,2,3,4,5)
# 7

#---5---
class decorator5(Decorator):
    def preprocess(self, arg):
        arg *= 2
        print 'Preprocess:', self.arglist[0][0], arg
        return arg
    def postprocess(self, result):
        result *= 2
        print 'Postprocess:', self.arglist[0][0], result
        return result

deco5a = decorator5()
deco5b = decorator5()
deco5c = decorator5()
deco5d = decorator5()
deco5e = decorator5()

@deco5a('a')
@deco5b('b')
@deco5c('c')
@deco5d('d')
@deco5e('e')
def test5(i):
    return i

print test5(10)

# Preprocess: a 20
# Preprocess: b 40
# Preprocess: c 80
# Preprocess: d 160
# Preprocess: e 320
# Postprocess: e 640
# Postprocess: d 1280
# Postprocess: c 2560
# Postprocess: b 5120
# Postprocess: a 10240
# 10240
--
http://mail.python.org/mailman/listinfo/python-list
Re: Unexpected result when comparing method with variable
On Mon, 4 Apr 2005 23:34:41 -0400, David Handy <[EMAIL PROTECTED]> wrote:

I'm not sure if this is the best way, but it might work:

for method, params in deferred:
    method(*params)
    try:
        if method.im_func is c.f.im_func:
            pass    # handle a special case
    except AttributeError:
        if method is c.f:
            pass    # handle a special case

Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
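A note on the method comparison above, in modern-Python terms: Python 2's `im_func` is spelled `__func__` today, and two bound-method objects compare equal when the underlying function and the instance both match, even though each attribute access may produce a distinct method object. A sketch (the class and names are illustrative):

```python
class C(object):
    def f(self):
        return 'f'

c = C()
m1 = c.f
m2 = c.f

# Bound methods compare equal when function and instance both match.
same = (m1 == m2)           # True
underlying = m1.__func__    # modern spelling of Python 2's im_func
# underlying is the plain function C.f
```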
Re: Decorator Maker is working. What do you think?
Hi again,

If anyone is reading this: fixed the endless loop when stacking the
same decorator instance. You can now reuse the same exact decorator
object either in stacks or on different functions with different
arguments.

The only quirk left is if you stack the same decorator object, and
have arguments on some, but not others. The arguments get processed
first while preprocessing, and last when postprocessing. That
shouldn't be a problem though, and is easy to avoid by using empty
argument lists where they are needed.

Let me know if there are still any problems with it. I know someone
will find this useful.

Cheers,
Ron

(I left off the tests as they were getting to be quite long.)

#---start---
class Decorator(object):
    """
    Decorator - A base class to make decorators with.

        self.function - the function decorated.
        self.arg - argument list passed by decorator
        self.preprocess - over ride to preprocess function arguments
        self.postprocess - over ride to postprocess function results
    """
    def __init__(self):
        self.function = None
        self.arglist = []
        self.arg = None
        self._number = 0

    def __call__(self, *arg):
        if '
--
http://mail.python.org/mailman/listinfo/python-list
Re: Docorator Disected
On Tue, 05 Apr 2005 06:52:58 GMT, [EMAIL PROTECTED] (Bengt Richter) wrote:
>>Ok, yes, besides the globals, but I figured that part is obvious so I
>>didn't feel I needed to mention it. The function call works the same
>>even though they are not nested functions.
>
>I am afraid that is wrong. But be happy, this may be the key to what ISTM
>is missing in your concept of python functions ;-)
The expression in the form of "function(args)(args)" is the same
pattern in two "different" cases, which was all that I was trying to
say. Not that the exact process of the two different cases were the
same.
>So, no, it does not "work the same." In fact, it is a misconception to talk
>about
>a nested fee as if it existed ready to call in the same way as foo. It doesn't
>exist that way until the fee def is EXECUTED, producing the callable fee.
Ok, I'm going to have to be more careful in how I phrase things, I
think; I tend to over-generalize a bit. I said they were "the same",
but meant similar, a mistake in wording, but not in my understanding.
But this is a good point. In my example the calling expression does
not yet know who the next tuple of arguments will go to until foo
returns it. That part is the same, but as you point out in a nested
scope foo defines fee then returns it. And in the non nested example
fee is already defined before foo is called. And they must use
globals to communicate because they do not share the same name space.
They differ because fee is temporary, in the nested version, only
existing until the expression foo(arg)(arg) is evaluated. It never
gets assigned a name in foo's parent name space. Do I have that
correct?
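That reading can be confirmed directly: each call of the outer function executes the inner def again and yields a brand-new function object. A quick modern-Python check (names illustrative):

```python
def foo():
    def fee():
        return 'fee'
    return fee

a = foo()
b = foo()

# Each call to foo() runs the inner def again, producing a fresh
# function object: equal-behaving, but distinct.
distinct = a is not b   # True
```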
>We can use dis to see the above clearly:
Love the byte code walk through, Thanks. Is there a resource that
goes in depth on python byte code and the compiler? I haven't been
able to find much on it on google.
> >>> import time
> >>> def globalfun(): return '[%s]'%time.ctime()
> ...
> >>> foo()
>
> >>> foo()(111, 222)
> (111, 222)
> >>> foo()(333)
> (333, '[Mon Apr 04 22:31:23 2005][Mon Apr 04 22:31:23 2005]')
> >>> foo()(333)
> (333, '[Mon Apr 04 22:31:37 2005][Mon Apr 04 22:31:37 2005]')
I like your idea of using time stamps to trace code! :)
>[...]
>>Today I believe I have the correct view as I've said this morning. I
>>could be wrong yet again. I hope not though I might have to give up
>>programming. :/
>Don't give up. It would be boring if it were all instantly clear.
>The view is better after an enjoyable hike, and some of the flowers
>along the way may turn out prettier than whatever the vista at the
>top may be ;-)
I won't give up, at most I would take a break, but I love programming
too much to give it up. ;-)
>Maybe the above will help make functions and decorators a little easier
>to understand.
I understand functions, sometimes it's difficult to describe just what
it is I don't understand yet, and sometimes I fool myself by jumping
to an invalid conclusion a little too quickly. But I do this for
enjoyment and learning, so I'm not constrained by the need to not make
mistakes (those are just part of learning, in my opinion), as I would
be if my job depended on it. However, it's a little frustrating when my
inability to write well, gets in the way of expressing myself
accurately.
But a few questions remain...
When a @decorator statement is found, How does the compiler handle it?
Let me see if I can figure this out...using dis. :)
>>> from dis import dis
>>> def deco1(d1): return d1
>>> def func1(f1):
	@deco1
	def func2(f2):
		return f2
	return func2(f1)
>>> func1(2)
2
>>> dis(deco1)
  1           0 LOAD_FAST                0 (d1)
              3 RETURN_VALUE
>>> dis(func1)
  2           0 LOAD_GLOBAL              0 (deco1)
              3 LOAD_CONST               1 (<code object func2 at ..., line 2>)
              6 MAKE_FUNCTION            0
              9 CALL_FUNCTION            1
             12 STORE_FAST               1 (func2)

  5          15 LOAD_FAST                1 (func2)
             18 LOAD_FAST                0 (f1)
             21 CALL_FUNCTION            1
             24 RETURN_VALUE
I'm not sure how to interpret this... Line 5 and below is the return
expression. The part above it is the part I'm not sure about.
Is the first CALL_FUNCTION calling deco1 with the result of the
defined functions reference, as it's argument? Then storing the result
of deco1 with the name func2?
If so the precompiler/parser is replacing the @deco1 with a call to
the deco1 function like this.
deco1( (def func2(f2):return f2) )
But this causes an illegal syntax error on the def statement. So you
can't do it directly. Or is there yet another way to view this? :)
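Right: since def is a statement rather than an expression, it can't be passed inline. The equivalent source first binds the inner function and then rebinds it through the decorator call, which is what the MAKE_FUNCTION / CALL_FUNCTION / STORE_FAST sequence corresponds to. A sketch of that spelled-out form:

```python
def deco1(d1):
    return d1

def func1(f1):
    # What '@deco1' over 'def func2' compiles to, written by hand:
    def func2(f2):            # the def binds func2 first
        return f2
    func2 = deco1(func2)      # then the decorator call rebinds it
    return func2(f1)

# func1(2) returns 2
```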
Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorator Base Class: Needs improvement.
On Tue, 05 Apr 2005 02:55:35 -0400, Steve Holden <[EMAIL PROTECTED]> wrote:

>Ron_Adam wrote:
>> Ok, that post may have a few(dozen?) problems in it. I got glitched
>> by IDLE not clearing variables between runs, so it worked for me
>> because it was getting values from a previous run.
>>
>> This should work better, fixed a few things, too.
>>
>> The decorators can now take more than one argument.
>> The function and arguments lists initialize correctly now.
>>
>Ron:
>
>I've followed your attempts to understand decorators with interest, and
>have seen you engage in conversation with many luminaries of the Python
>community, so I hesitate at this point to interject my own remarks.

I don't mind. It might help me communicate my ideas better.

>In a spirit of helpfulness, however, I have to ask whether your
>understanding of decorators is different from mine because you don't
>understand them or because I don't.

Or it's just a communication problem, and we both understand.
Communicating is not my strongest point, but I am always willing to
clarify something I say.

>You have several times mentioned the possibility of a decorator taking
>more than one argument, but in my understanding of decorators this just
>wouldn't make sense. A decorator should (shouldn't it) take precisely
>one argument (a function or a method) and return precisely one value (a
>decorated function or method).
>
>> It doesn't work with functions with more than one variable. It seems
>> tuples don't unpack when given to a function as an argument. Any way
>> to force it?

What I was referring to is the case:

    @decorator(x,y,z)

as being a decorator expression with more than one argument, and not:

    @decorator(x)(y)

This would give a syntax error if you tried it:

    >>> @d1(1)(2)
    SyntaxError: invalid syntax

The problem I had with tuple unpacking had nothing to do with
decorators.
I was referring to a function within the class, and I needed to be
consistent with my use of tuples as arguments to functions and the use
of the '*' indicator.

>Do you understand what I mean when I say a decorator should take one
>function as its argument and it should return a function?
>
>regards
>  Steve

Hope this clarifies things a bit.

Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
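For the @decorator(x,y,z) form discussed above, the usual two-stage reading is that decorator(x,y,z) is evaluated first and must itself return a one-argument decorator. A minimal modern-Python sketch of that shape (the `tag` name is illustrative, not from the thread):

```python
def tag(*labels):              # factory: receives the decorator arguments
    def decorator(func):       # the real decorator: receives the function
        def wrapper(*args):
            return (labels, func(*args))
        return wrapper
    return decorator

@tag('x', 'y', 'z')
def ident(v):
    return v

# ident(7) returns (('x', 'y', 'z'), 7)
```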
Re: Decorator Maker is working. What do you think?
On 5 Apr 2005 00:35:45 -0700, "Kay Schluehr" <[EMAIL PROTECTED]> wrote:

>Ron_Adam wrote:
>> Ok... it's works! :)
>>
>> So what do you think?
>
>Not much. As long as You do not present any nontrivial examples like
>Guidos MultiMethod decorator or an implementation of the dispatching
>"case" decorator I proposed that would benefit from factoring into pre-
>and postprocessing the pattern has only limited use and worse it
>suggests a misnomer: it obscures the semantics that is clearly
>functional/OO not procedural.
>
>Regards,
>Kay

No good points at all? :/

Are you saying I need to present a nontrivial example such as Guido's
MultiMethod decorator, or the one you proposed? If I'm not mistaken,
there is a difference. Those examples are specific applications to
solve specific problems using decorator expressions, while what I'm
attempting here is a general purpose object with which many specific
uses could more easily be built.

What about commonly used trivial cases? As a whole, are they not
nontrivial? How is it limited?

I'm still not sure when, and what type of things, should and should
not be done with decorators. It seems to me they may be good tools
for validating inputs and outputs, and for getting data on program
performance. In those cases, it would be good to be able to disable
them when they aren't needed.

Another use is to redefine a decorator as needed to adapt a function
to a specific input or output. Is that what Guido's multimethod does?

And they might also be used as a way to change the behavior of a group
of related functions all at once at run time, in order to save a lot
of code duplication.

I also think they could be abused easily, and used when it would be
better to just use a class in the first place.

Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorator Base Class: Needs improvement.
On Tue, 05 Apr 2005 14:32:59 -0700, Scott David Daniels
<[EMAIL PROTECTED]> wrote:

>Ron_Adam wrote:
>> What I was referring to is the case:
>>     @decorator(x,y,z)
>> As being a decorator expression with more than one argument.
>But, we generally say this is a call to a function named decorator
>that returns a decorator. If you called it:
>    @make_decorator(x,y)
>    def ...
>We'd be sure we were all on the same page.

Good point, I agree. :)  Or alternatively:

    @call_decorator(x,y)

Using either one would be good practice.

>How about this as an example:
>
>    def tweakdoc(name):
>        def decorator(function):
>            function.__doc__ = 'Tweak(%s) %r' % (name, function.__doc__)
>            return function
>        return decorator
>
>What is confusing us about what you write is that you are referring to
>tweakdoc as a decorator, when it is a function returning a decorator.

Bengt Richter is also pointing out that there is an inconsistency in
Python's documents in the use of "decorator". I've been trying to
start referring to the "@___" as the decorator-expression, but that
still doesn't quite describe what it does either. Decorator-caller
might be better. Then the decorator-function is the part that defines
the decorated-function.

Another alternative is to call the entire process what it is,
function-wrapping. Then the "@" statement would be the wrapper-caller,
which calls the wrapper-function, which defines the wrapped-function.
That's much more descriptive to me.

If we do that, then we could agree to use decorator as a general term
to describe a function as decorated, meaning it is wrapped, and get
away from the decorator/decoratee discussions. But I think the
terminology has been hashed out quite a bit before, so I don't expect
it to change. I'll just have to try to be clearer in how I discuss it.

>> and not:
>>     @decorator(x)(y)
>
>This is only prevented by syntax (probably a good idea, otherwise
>we would see some very complicated expressions before function
>declarations).
>
>--Scott David Daniels
>[EMAIL PROTECTED]

And this isn't allowed either, although it represents more closely the
nesting that takes place when decorator-expressions are stacked:

@make_deco1 @make_deco2 @make_deco3
def function1(n):
    n += 1
    return n

This is allowed, but it's not pretty:

@make_deco1
@make_deco2
@make_deco3
def function1(n):
    n += 1
    return n

Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Docorator Disected
On Wed, 06 Apr 2005 00:23:01 GMT, [EMAIL PROTECTED] (Bengt Richter) wrote:

>I don't know of anything other than the compiler and cpython sources.
>The byte codes are not all the same from version to version, since
>added language features may require new byte code operations, at least
>for efficiency, and (as our postings here show ;-) explaining code
>clearly is as much of a job as writing it. Fortunately, python code
>is pretty readable, and there is a lot of interesting reading in
>
>    Python-2.4xxx\Lib\compiler\ and Python-2.4xxx\Lib\compiler\
>
>on your disk if you download the source installation. Lots of
>other goodies as well like demo code etc.

Thanks, I've already downloaded the source as well as CVS, although I
don't have a recent version of VisualC++. I tried the Express version
8.0 since it's free, but it fails on the library with link errors :-/
Not that I expected it to work, since nothing I could find said it
would. Probably easier to load up Linux. But I don't need a compiler
to read the source.

>One of the original examples of decorating was to replace the
>staticmethod and classmethod function calls that had to be done
>after the method defs. So the simplest view goes back to
>
>@deco
>def foo(): pass
>
>being the equivalent (except if deco raises an exception) of
>
>def foo(): pass
>foo = deco(foo)
>
>The plain name @deco was then allowed to become a simple xxx.yyy(zzz,...)
>expression returning a callable that would serve like the decorator
>function a bare name normally referred to. And then the mechanism
>could be cascaded. I suggest looking at the code for the simplest
>cascade of normal decorators. E.g., (putting the example in the body
>of a function makes dis.dis easy ;-)

So the @decorator functionality was a very small incremental change to
the precompiler. That also explains the nesting behavior of stacked
decorators. A small change with a worthwhile functionality.
:-)

Looks like the @decorator is a pseudo function limited to a single
callable as its body. (Experimenting with a different syntax.)

@deco(a):
    def function(x):
        pass

function = deco(a)(function)(x)

Stacked, it would be:

@deco1(a):
    @deco2(b):
        def function(x):
            return x+1

function = deco2(b)(function)(x)
function = deco1(a)(function)(x)

Each subsequent stacked @deco statement redefines "function" when it
exits. If I have this correct, this would be the equivalent long hand
of two stacked @deco expressions.

Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
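The long-hand equivalence being worked out above can be checked in ordinary Python: stacked decorators apply bottom-up, with the one nearest the def applied first, so it ends up innermost. A sketch (names illustrative):

```python
def deco(tag):
    def decorator(func):
        def wrapper(x):
            return '%s(%s)' % (tag, func(x))
        return wrapper
    return decorator

@deco('outer')
@deco('inner')
def f(x):
    return str(x)

# The same thing written long hand, nearest decorator applied first:
def g(x):
    return str(x)
g = deco('inner')(g)   # applied first (closest to the def)
g = deco('outer')(g)   # applied last (outermost)

# f(1) and g(1) both give 'outer(inner(1))'
```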
Re: Decorator Base Class: Needs improvement.
On Tue, 05 Apr 2005 19:38:38 -0400, Steve Holden <[EMAIL PROTECTED]> wrote:

>So what you are saying is that you would like to be able to use
>arbitrarily complex expressions after the "at" sign, as long as they
>return a decorator? If so, you've been "pronounced" :-)
>
>regards
>  Steve

No, not at all, I never said that. But.. ;-)

If we get into what I would like, as in my personal wish list, that's
a whole other topic.

I would have preferred the @ symbol to be used as an inline assert
introducer, which would have allowed us to put debug code anywhere we
need, such as:

    @print total @

Then I could switch debugging statements on and off by setting
__debug__ to True or False wherever I need to.

And as far as decorators go, I would have preferred a keyword,
possibly "wrap", with a colon after it. Something like this:

    def function_name(x):
        return x

    wrap function_name:
        wrapper1()
        wrapper2()
        wrapper3()

A wrap command could more directly accomplish the wrapping, so that
def statements within def statements aren't needed. (Unless you wanted
them for some reason.)

And as far as arbitrary complex expressions go... actually, I think
that's quite possible to do as it is. ;-)

But these are just a few of my current thoughts, which may very well
change. It's an ever changing list.

Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorator Base Class: Needs improvement.
On Wed, 06 Apr 2005 16:33:24 +1200, Greg Ewing <[EMAIL PROTECTED]> wrote:

>Ron_Adam wrote:
>> I would have preferred the @ symbol to be used as an inline assert
>> introducer. Which would have allowed us to put debug code anywhere we
>> need. Such as @print total @.
>
>Don't lose heart, there are still two unused characters
>left, $ and ?.
>
>? might even be more mnemonic for this purpose, as
>
>    ?print "foo =", foo
>
>has a nice hint of "WT?%$%$ is going on at this point?"
>to it.

LOL, yes, it does. :-)
--
http://mail.python.org/mailman/listinfo/python-list
Re: Decorator Base Class: Needs improvement.
On Wed, 06 Apr 2005 08:10:22 GMT, [EMAIL PROTECTED] (Bengt Richter) wrote:

>I don't understand your seeming fixation with wrappers and wrapping.

Fixated implies I'm stuck on a single thing, but I'm not. I am
learning as I go, and exploring some possibilities as well. :-)

>That's not the only use for decorators. See Raymond Hettinger's
>optimizing decorators in the cookbook for instance.

Thanks, I'll look for it. Is it in the new edition? I haven't picked
it up yet.

>Decorators are something like metaclasses for functions,
>with much more general possibilities than wrapping, IMO.

I'm not sure I see the metaclass relation. What more general things
can be done with decorators that can't be done with a wrapper?

Wrapping, and the @decorator expressions, interest me because I see a
lot of potential in their use, and so I'm trying to learn them. At the
same time, there are things about the @ expression that seem (to me)
not the most practical way to do what it was intended for.

On the plus side, it's kind of cute with the little curly thing
propped up on top of the function. It's a neat trick that it does what
it does with a minimal amount of changes to the language, by taking
advantage of Python's existing function parameter and object passing
properties. It saves a bit of typing because we don't have to retype
the function name a few times. (Several people have referred to it as
'sugar', and now I am starting to agree with that opinion.)

On the minus side, it's not intuitive to use. It is attached to the
function definitions, so they are limited; they can't be easily
unwrapped and rewrapped without redefining the function also. The
recursive nature of stacked @ statements is not visible.

So my opinion of @ as a whole is currently: -1

>I think you'll have to show some convincing use cases showing a clear
>advantage over current decoration coding if you want converts ;-)

What about the following?
:-)

# Using this simple wrapper class:
class wrapper(object):
    def __call__(self, x):
        # preprocess x
        x *= 2                        # Make a change so we can see it
        result = self.function(x)
        # postprocess result
        return result

# A function to apply the wrapper:
def wrap(function, wrapper):
    w = wrapper()
    w.function = function
    return w

# The function
def fn(x):
    return x

print fn(5)    # Before

# Wrap it.
fn = wrap(fn, wrapper)
print fn(5)    # After

# Unwrap it.
fn = fn.function
print fn(5)    # And back again

# prints
# 5
# 10
# 5

It has several advantages over the @ expression. It doesn't need the
triple nested defines to get the function name and argument list, the
wrapper is simpler, and it can be placed on a function and then
removed, when and where it's needed, instead of at the point where the
function is defined.

The following behaves more closely to the existing @ expression in
that it has the same nesting behavior for stacked wrappers. I'm
looking into a way to do sequential non-nested stacked wrappers at
this point, where the output of one goes to the input of the next.
That can't be done currently with the @ decorator expression.

This stacks a list of 10 wrappers on 10 different functions and
reverses the order of the stack every other function. In this case
they are all the same, but they could all be different.

Cheers,
Ron

#---start---
class wrapper(object):
    def __call__(self, *x):
        # preprocess
        x = [x[0]+1,]
        print 'preprocess', x[0], self.args
        # call function
        result = self.function(*x)
        # postprocess
        result += 1
        print 'postprocess', result, self.args
        return result

def wrap(f, w, shape='forward'):
    if shape == 'reverse':
        w.reverse()
    for ww in w:
        nw = wrapper()
        try:
            nw.args = ww[1]
        except TypeError:
            wf = ww[0]
        nw.function = f
        f = nw
    return f

# Make a list of ten wrappers with an id number as an additional
# wrapper parameter.
w = []
for n in xrange(10):
    w.append((wrapper, n))

# Wrap 10 functions, 10 times, in reversing order.
def func0(x): return x
def func1(x): return x
def func2(x): return x
def func3(x): return x
def func4(x): return x
def func5(x): return x
def func6(x): return x
def func7(x): return x
def func8(x): return x
def func9(x): return x

func0 = wrap(func0, w)
func1 = wrap(func1, w, 'reverse')
func2 = wrap(func2, w)
func3 = wrap(func3, w, 'reverse')
func4 = wrap(func4, w)
func5 = wrap(func5, w, 'reverse')
func6 = wrap(func6, w)
func7 = wrap(func7, w, 'reverse')
func8 = wrap(func8, w)
func9 = wrap(func9, w, 'reverse')

print func0(0)
print func1(0)
print func2(0)
print func3(0)
print func4(0)
print func5(0)
print func6(0)
print func7(0)
print func8(0)
print func9(0)
#---end---
--
http://mail.python.org/mailman/listinfo/python-list
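As a footnote to the wrap/unwrap idea above: the standard library's functools module covers the same need in modern Python. functools.wraps copies the wrapped function's metadata onto the wrapper and records the original in __wrapped__, so a function can be wrapped and unwrapped without a custom class. A sketch (the double_input name is illustrative):

```python
import functools

def double_input(func):
    @functools.wraps(func)    # copies __name__ etc., sets __wrapped__
    def wrapper(x):
        return func(x * 2)
    return wrapper

def fn(x):
    return x

fn2 = double_input(fn)
# fn2(5) returns 10
# fn2.__name__ is 'fn'       (metadata preserved)
# fn2.__wrapped__ is fn      (the original can be recovered)
```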
Re: Decorator Base Class: Needs improvement.
>I find I am still left asking the question "why would anyone want to
>do that?".

You didn't say which part you were referring to.

As far as wrapping the same exact wrapper more than once: you probably
wouldn't do that. It's just easier to test the use of multiple
wrappers that way.

As far as reversing the stack of wrappers, that could matter. Because
the wrappers are nested, the inputs are done in forward order, and the
results are sent back out in reverse order. If you are applying
several different graphic filters to an image display function, the
order makes a difference, so depending on whether you are modifying
the input values or the returned values, you may need to reverse the
wrappers before applying them.

>The difference between a use case and an example is that a use case
>should demonstrate the solution of a problem that someone might
>reasonably be wanting to solve, rather than a way of creating an
>abstract program structure for which there is no easily observable
>requirement.

It was an example that demonstrates a basic capability of an
alternative approach, but it is also one solution to certain
sub-problems: simpler function wrapping, and adding and removing
wrappers at locations other than the function definitions. It's
probably not a new approach either.

>I can understand it if you are merely pursuing this topic because of
>your fascination with the capabilities of Python, but I don't have the
>feeling that there are legion Python programmers out there waiting
>impatiently to be able to build wrapped functions.

Yes, I am pursuing the topic because I enjoy experimenting, because I
enjoy programming with Python, and because I have an interest in using
it to solve real problems. So the answer is all of the above. ;-)

Cheers,
Ron
--
http://mail.python.org/mailman/listinfo/python-list
Re: Lambda: the Ultimate Design Flaw
On 7 Apr 2005 11:11:31 -0400, [EMAIL PROTECTED] (Aahz) wrote:

>You're conflating two different things:
>
>* Whether Python currently has only one way to do things
>
>* Whether Python has a design goal of only one way to do things
>
>I'll agree that Python currently has many examples of more than one way
>to do things (and even Python 3.0 won't remove every example, because
>anything more complicated than a Turing Machine has more than one way to
>do things). But I won't agree that Only One Way has been abandoned as a
>design principle.

I would add that the meaning is: Python has one obvious best way to do things. Meaning that the most obvious and clearest way, the way that comes to mind first, will in most cases also be the best way.

I seem to remember reading it put in that way some place at some time.
Re: Lambda: the Ultimate Design Flaw
On Thu, 7 Apr 2005 17:49:39 -0400, "Terry Reedy" <[EMAIL PROTECTED]> wrote:

>"Ron_Adam" <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]
>> Python has one obvious best way to do things.
>
>More exactly, 'should preferably have' rather than 'has'.
>
>> Meaning that the most obvious and clearest way, the way that comes to
>> mind first, will in most cases, also be the best way.
>>
>> I seem to remember reading it put in that way some place at some time.
>
>>>> import this    # The Zen of Python
>
>tjr ;-)

Ron
Re: Thoughts on some stdlib modules
On Fri, 08 Apr 2005 05:15:23 -0400, vegetax <[EMAIL PROTECTED]> wrote:

>Are those issues being considered right now? i cant find any PEP addressing
>the issue especifically, at least cooking it for python 3000.
>
>specific topics could be:
>
>grouping related modules.
>removing useless legacy modules.
>refactoring duplicated functionality.
>removing/redesigning poorly written modules.
>adding a module versioning system.

I've been thinking that the lib directory could be better named and rearranged a bit. I sometimes mistakenly open the libs directory instead of lib because of the name similarity.

An alternative might be to use the name "packs" or "packages" in place of "lib", which would emphasize the use of packages as the primary method of extending Python. The standard library could then be a package called "stdlib" within this directory. Third party packages would then be along side "stdlib" and not within a directory that is within the standard library.

It would be mostly a cosmetic change, but I believe it would be worth doing if it could be done without breaking programs that may have hard coded path references to the library. :-/

Ron
Re: Thoughts on some stdlib modules
On Sat, 09 Apr 2005 02:22:45 -0400, Steve Holden <[EMAIL PROTECTED]>
wrote:
>Ron_Adam wrote:
>> On Fri, 08 Apr 2005 05:15:23 -0400, vegetax <[EMAIL PROTECTED]>
>> wrote:
>>
>>
>>>Are those issues being considered right now? i cant find any PEP addressing
>>>the issue especifically, at least cooking it for python 3000.
>>>
>>>specific topics could be:
>>>
>>>grouping related modules.
>>>removing useless legacy modules.
>>>refactoring duplicated functionality.
>>>removing/redesigning poorly written modules.
>>>adding a module versioning system.
>>
>>
>> I've been thinking that the lib directory could be better named and
>> rearranged a bit. I sometimes mistakenly open the libs directory
>> instead of lib because of the name similarity.
>>
>> An alternative might be to use the name "packs" or "packages" in place
>> of "lib", which would emphasize the use of packages as the primary
>> method of extending python. The standard library could then be a
>> package called "stdlib" within this directory. Third party packages
>> would then be along side "stdlib" and not within a directory that is
>> within the standard library.
>>
>> It would be mostly a cosmetic change, but I believe it would be worth
>> doing if it could be done without breaking programs that may have hard
>> coded path references to the library. :-/
>>
>> Ron
>>
>Ron:
>
>You do a lot of thinking, don't you? :-)
Just the way my mind works. ;-)
>This is a *very large* change, not a cosmetic one, requiring changes to
>many installation routines (including, probably, distutils) and causing
>problems for software that attempts to operate with multiple versions of
>Python - and those projects have problems enough as it is despite
>Python's quite fine record of careful development.
I thought it might be more involved than it seemed.
>This seems a rather high price to pay just to avoid having you
>mistakenly avoid opening "libs" instead of "lib" - a distinction that is
>only meaningful on Windows platforms anyway, I believe.
That's not surprising on Windows.
>You are correct in suggesting that the library could be better organized
>than it is, but I felt we would be better off deferring such change
>until the emergence of Python 3.0, which is allowed to break backwards
>compatibility. So, start working on your scheme now - PEP 3000 needs
>contributions. My own current favorite idea is to have the current
>standard library become the "stdlib" package, but I'm sure a lot of
>people would find that suggestion at least as half-baked as yours.
Yes, I agree, the "stdlib" should be a package. So I don't find it
half-baked at all. Packages are part of python, so python should take
advantage of them.
As far as an organizing scheme, I've come to the conclusion that files
should be organized by who is responsible for them, as in who to
contact if something doesn't work correctly. And not allowing files
from different sources to be intermixed is definitely worth doing if
possible; that is something Windows does very, very badly.
For Python, that would mean packages should be fully self-contained
and not move any files to other directories if possible, which
simplifies installs, uninstalls, and upgrades. But it would require
much more than a cosmetic change, and more than the simple, or not so
simple, directory changes I suggested.
One of the tools I wrote in C (early 90's) was a makefile maker. I
still have the source code here somewhere. Starting with the main
source file and a template with the compile options in it, it searched
all included files recursively for references and built the makefile
using the template. It really made large projects easy. I don't
think that's anything new now. Dist tools should do something like
that to find needed files. It shouldn't matter what directories they
are in, as long as the tool has read access to them, or they are in
the search path, or there's a direct or indirect reference to them in
the source code someplace.
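The recursive reference scan described above can be sketched in a few lines of Python (the function name and the dict-based stand-in for a source tree are my own illustration, not the original C tool):

```python
import re

def scan_includes(files, start):
    """Collect every file reachable from `start` via #include "..." lines."""
    seen = set()
    def visit(name):
        if name in seen or name not in files:
            return
        seen.add(name)
        for inc in re.findall(r'#include\s+"([^"]+)"', files[name]):
            visit(inc)
    visit(start)
    return seen

# a tiny in-memory stand-in for a source tree
sources = {
    'main.c': '#include "util.h"\nint main(void) { return 0; }\n',
    'util.h': '#include "list.h"\n',
    'list.h': '/* no further includes */\n',
}
print(sorted(scan_includes(sources, 'main.c')))   # -> ['list.h', 'main.c', 'util.h']
```

The resulting set is exactly the dependency list a makefile rule (or a dist tool) would need, regardless of which directories the files live in.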
>{If an idea is more-half-baked than something exactly half-baked is it
>0.4-baked or 0.6-baked? Does "more half-baked" actually mean "less baked"?)
>
>regards
> Steve
All new ideas are un-baked, they aren't fully baked until they are old
ideas which have been implemented. So 0.6 baked is more than half
baked, and 0.4 baked is ... pudding. ;-)
I'll consider working on that PEP. It sounds like it might be a good
project for me.
Cheers,
Ron
Re: Thoughts on some stdlib modules
On Sun, 10 Apr 2005 13:18:01 +0200, "Fredrik Lundh" <[EMAIL PROTECTED]> wrote:

>Kay Schluehr wrote:
>
>> I fear that Python 3.0 becomes some kind of vaporware in the Python
>> community that paralyzes all redesign efforts on the std-lib.

Even if Python 3.0 never materializes, the documented PEP may still have an impact on further development of Python. So it might also be referred to as PEP __future__.

Has there been any suggestion of a time line? If there is a new release every 18 months, v2.4 to v3.0 would be, 108 months? Or would there be a jump from v2.5 or v2.6 to v3.0?

>that, combined with the old observation that CPython developers,
>when given a choice, prefer to write C code over Python code, is
>making the standard library a lot less useful than it could be.
>
>(if you look at recent releases, most standard lib additions are things
>that are fun for language tinkerers and people looking for many ways
>to write simple algorithms, but very little stuff that's useful for scripters
>and application builders. a C implementation of _bisect. hello?)

"Fun" things and demos could be put in an "extras" package. That might do a lot to clean up the library, so that the rest of it can be put in better perspective. Also, looking at my python24 directory, there is a 'tools' dir that could probably be put in the extras package as well.

>if I were in charge, I'd separate 90% of the standard library from the
>core distribution, made sure it ran on multiple implementions (at least
>the two latest CPython implementations, plus what's needed to make
>as much as possible available on the latest Jython and IronPython
>releases), bundled a number of carefully selected external libraries
>(without forcing developers to give up rights and loose control over
>maintenance), refactor the test suite so it could be used both to test
>the library and to see what parts worked properly on your platform,
>and make new releases (for testers and early adopters) available
>regularily.
Larger utility packages could be moved from the "stdlib" but still be included as separate packages that can optionally be installed from the Python installer. Idle, distutils, tcl/tk, ..?

Probably a clearer definition of purpose for the different parts is needed. I haven't seen anything documented on that specifically. Has it been, up to recently, 'more is better' as long as it doesn't break anything? With the emphasis on growing the language?

What would definitions of 'purpose' be for?

'__builtin__'
'__builtins__'

packages:
'stdlib' ... 'stdlib23', 'stdlib24'  # Versions?
'extras'   # examples, demos, and fun stuff

other packages included in the install
packages available at 'pythonpacks.com' (possible?)
separately installed applications ?

Is it possible to get some sort of overview on extending Python? Does one already exist?

Cheers,
Ron
Re: templating system
On Sun, 10 Apr 2005 17:55:06 +0200, Ksenia Marasanova
<[EMAIL PROTECTED]> wrote:
>Hi,
>
>I am looking for fast, simple templating system that will allow me to
>do the following:
>- embed Python code (or some templating code) in the template
>- generate text output (not only XML/HTML)
>
>I don't need a framework (already have it), but just simple
>templating. The syntax I had in mind is something like that:
>
># in Python module
>def some_view():
># some code goes here...
>records = get_some_data()
>req = get_request_class()
>return template('some_template.tmpl', **locals()).render()
>
># in some_template.tmpl:
>
>
><%for record in records%>
><%=record.title%>
><%end for%>
>
>
>
>From what I saw Cheetah seems to be the only one that can do it. I was
>hoping there might be alternatives that I've missed :)
>Thanks!
I use Cheetah along with publish.py by Chris Gonnerman. Cheetah
constructs web pages using .html templates and a Python script (which
is why I like it). Publish.py gives me fast one-click publishing to
the web server, adding or removing files from the server to match my
local publish directory.
It looks like Chris has a templating program here as well. I
haven't checked it out yet, so I can't tell you about it.
http://newcenturycomputers.net/projects/webpub.html
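For the simplest cases, plain placeholder substitution (no loops or conditionals, unlike the Cheetah-style template in the question), the standard library's string.Template is already enough; this is just a sketch:

```python
from string import Template

def render(template_text, **context):
    # $name placeholders only; loops/conditionals need a real engine like Cheetah
    return Template(template_text).substitute(**context)

page = render("Hello $name, you have $count messages.", name="Ksenia", count=3)
print(page)   # -> Hello Ksenia, you have 3 messages.
```

Anything with `<%for%>`-style control flow genuinely needs a template engine, which is where Cheetah earns its keep.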
Cheers,
Ron
Re: web authoring tools
On Mon, 11 Apr 2005 05:14:47 GMT, "Brandon J. Van Every" <[EMAIL PROTECTED]> wrote:

>As is easily noticed, my website sucks. Enough people keep ragging
>on me about it, that maybe I'll up and do something about it. However,
>I currently have FrontPage 2000 and I hate it. Ideally, I would like an
>open source website + html design tool implemented in Python, so
>that possibly someday I can fix whatever's broken about it. That said,
>I would like a tool that actually saves me work as a web designer. I
>don't feel that FrontPage 2000 does this. I'm saying there's a certain
>level of maturity that has to exist in the app, it can't be some "alpha
>quality" thing. If you know of such a beast in Python, please
>let me know.

I've always found an HTML-aware text editor works best. Programs like FrontPage tend to try to insert things when and where you don't want them, so you end up fighting the program and/or having to get their bugs out of your web site. I'm sure there are probably some good visual what-you-see-is-what-you-get editors, I just haven't found any I like.

I would use a commercial software package if I was doing an internet store with an inventory database and shopping carts. That's a situation where you want your web site to be in a standard, proven format. But you hand over a lot of design freedom also.

>Here are some examples of reasonable website designs for my purposes as
>a game developer or consultant:
>
>http://www.igda.org/seattle/
>http://www.cyphondesign.com/
>http://www.alphageeksinc.com/
>http://www.gamasutra.com

The top three were done with text editors. If you view the source, you will notice the formatting has good, consistent indenting, and there aren't a lot of extra tags or other information needlessly inserted. They make good use of CSS for formatting also. If this is the type of thing you want, save the pages and study how they did it. Use your own text and graphics of course.
The fourth one in your list uses something called SiteCatalyst, which I'm not familiar with. The web site for it is listed in the source. You'll notice it has empty spaces and inconsistent indenting due to its being assembled from templates.

>I'm not sure if I want a blogging capability, or something more like
>Gamasutra. That's a quality vs. quantity issue. I don't know if I want
>a web forum. I generally don't like web forums and I've tended to let
>Yahoo! Groups do the mailing list job.

Since you're unsure of what you want, you should probably follow the rule 'if in doubt, leave it out'; you can always expand or add features later.

>I believe my webhost can take either Unix or Windows stuff. My local
>machine where I do all development is Windows. I'd be interested to
>know about Linux solutions too though.

Here's a good place to start to find Python web site software:

http://www.python.org/topics/web/

Hope this helped,

Cheers,
Ron
exporting imports to reduce exe size?
In looking at ways to reduce the size of exes created with py2exe, I've noticed that it will include a whole library or module even if I only need one function or value from it.

What I would like to do is to import individual functions and then export ('write') them to a common resource file, and import that with just the resources I need in it. Has anyone tried this?

I'm considering using pickle to do it, but was wondering if this is even a viable idea. Is it possible to pickle the name space after the imports and then reload the name space in place of the imports later?

Cheers,
Ron
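For what it's worth, pickle would not help here even where it works, because it serializes functions by reference (module plus name) rather than by code, so the original module must still be importable at load time; a quick sketch:

```python
import pickle
import math

# pickle stores only a reference like "math.sqrt"; unpickling re-imports
# the math module, so nothing is actually exported into the pickle itself
blob = pickle.dumps(math.sqrt)
restored = pickle.loads(blob)
print(restored is math.sqrt)   # -> True
print(restored(9.0))           # -> 3.0
```

Since the pickle carries no bytecode, the whole module still has to ship with the exe, which is the size problem being discussed.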
Re: exporting imports to reduce exe size?
On Tue, 12 Apr 2005 12:14:38 -0400, Chris Cioffi <[EMAIL PROTECTED]> wrote:

>My first thought is what if the function you are using uses other
>functions (maybe not exported by the module/package)? For example: if
>some_func() makes a call to _some_weird_module_specific_func(), how
>are you going to make sure you get the
>_some_weird_module_specific_func function? This doesn't even start to
>work when the function you are interested in is also using functions
>from other modules/packages.
>
>Besides, what py2exe is really doing is grabbing the .pyo or .pyc
>files and throwing them in a zip file. Is it really worth the effort
>to save a 100K or so, at most?
>
>Chris

I think in some cases it could save quite a lot more than 100K, when you consider that the module you include to get a single function may then include 3 modules for other functions you don't need, and they in turn do the same.

After looking into it some, I agree it's probably not worth the effort. Python's pickle, shelve, and marshal functions don't work on most module objects, so it's pretty much a dead end. It would need to be doable in a simple and general way to be worthwhile.

Thanks anyways,
Ron
Re: pre-PEP: Simple Thunks
On Fri, 15 Apr 2005 16:44:58 -0700, Brian Sabbey <[EMAIL PROTECTED]> wrote:

>Simple Thunks
>-
>
>Thunks are, as far as this PEP is concerned, anonymous functions that
>blend into their environment. They can be used in ways similar to code
>blocks in Ruby or Smalltalk. One specific use of thunks is as a way to
>abstract acquire/release code. Another use is as a complement to
>generators.

I'm not familiar with Ruby or Smalltalk. Could you explain this without referring to them?

>A Set of Examples
>=
>
>Thunk statements contain a new keyword, 'do', as in the example below. The
>body of the thunk is the suite in the 'do' statement; it gets passed to
>the function appearing next to 'do'. The thunk gets inserted as the first
>argument to the function, reminiscent of the way 'self' is inserted as the
>first argument to methods.
>
>def f(thunk):
>    before()
>    thunk()
>    after()
>
>do f():
>    stuff()
>
>The above code has the same effect as:
>
>before()
>stuff()
>after()

You can already do this, this way:

>>> def f(thunk):
...     before()
...     thunk()
...     after()
...
>>> def before():
...     print 'before'
...
>>> def after():
...     print 'after'
...
>>> def stuff():
...     print 'stuff'
...
>>> def morestuff():
...     print 'morestuff'
...
>>> f(stuff)
before
stuff
after
>>> f(morestuff)
before
morestuff
after
>>>

This works with arguments also.

>Other arguments to 'f' get placed after the thunk:
>
>def f(thunk, a, b):
>    # a == 27, b == 28
>    before()
>    thunk()
>    after()
>
>do f(27, 28):
>    stuff()

Can you explain what 'do' does better? Why is the 'do' form better than just the straight function call?

f(stuff, 27, 28)

The main difference I see is that the call to stuff is implied in the thunk, something I dislike in decorators. In decorators, it works that way due to the way the functions get evaluated. Why is it needed here?

When I see 'do', it reminds me of 'do loops'. That is, 'do' implies some sort of flow control.
I gather you mean it as "do items in a list", but with the capability to substitute the named function. Is this correct?

Cheers,
Ron
Re: pre-PEP: Simple Thunks
On Sat, 16 Apr 2005 17:25:00 -0700, Brian Sabbey
<[EMAIL PROTECTED]> wrote:
>> You can already do this, this way.
>>
> def f(thunk):
>> ... before()
>> ... thunk()
>> ... after()
>> ...
> def before():
>> ... print 'before'
>> ...
> def after():
>> ... print 'after'
>> ...
> def stuff():
>> ... print 'stuff'
>> ...
> def morestuff():
>> ... print 'morestuff'
>> ...
> f(stuff)
>> before
>> stuff
>> after
> f(morestuff)
>> before
>> morestuff
>> after
>
>>
>> This works with arguments also.
>
>Yes, much of what thunks do can also be done by passing a function
>argument. But thunks are different because they share the surrounding
>function's namespace (which inner functions do not), and because they can
>be defined in a more readable way.
Generally my reason for using a function is to group and separate code
from the current name space. I don't see that as a drawback.
Are thunks a way to group and reuse expressions in the current scope?
If so, why even use arguments? Wouldn't it be easier to just declare
what I need right before calling the group? Maybe just informally
declare the calling procedure in a comment.
def thunkit:    # no argument list defines a local group.
    # set thunk, x and y before calling.
    before()
    result = thunk(x,y)
    after()

def foo(x,y):
    x, y = y, x
    return x,y

thunk = foo
x,y = 1,2
do thunkit
print result
-> (2,1)
Since everything is in local name space, you really don't need to
pass arguments or return values.
The 'do' keyword says to evaluate the group. Sort of like eval() or
exec would, but in a much more controlled way. And the group as a
whole can be passed around by not using the 'do'. But then it starts
to look and act like a class with limits on it. But maybe a good
replacement for lambdas?
I sort of wonder if this is one of those things that looks like it
could be useful at first, but it turns out that using functions and
class's in the proper way, is also the best way. (?)
>You're right that, in this case, it would be better to just write
>"f(stuff, 27, 28)". That example was just an attempt at describing the
>syntax and semantics rather than to provide any sort of motivation. If
>the thunk contained anything more than a call to 'stuff', though, it would
>not be as easy as passing 'stuff' to 'f'. For example,
>
>do f(27, 28):
> print stuff()
>
>would require one to define and pass a callback function to 'f'. To me,
>'do' should be used in any situation in which a callback *could* be used,
>but rarely is because doing so would be awkward. Probably the simplest
>real-world example is opening and closing a file. Rarely will you see
>code like this:
>
>def with_file(callback, filename):
> f = open(filename)
> callback(f)
> f.close()
>
>def print_file(file):
> print file.read()
>
>with_file(print_file, 'file.txt')
>
>For obvious reasons, it usually appears like this:
>
>f = open('file.txt')
>print f.read()
>f.close()
>
>Normally, though, one wants to do a lot more than just print the file.
>There may be many lines between 'open' and 'close'. In this case, it is
>easy to introduce a bug, such as returning before calling 'close', or
>re-binding 'f' to a different file (the former bug is avoidable by using
>'try'/'finally', but the latter is not). It would be nice to be able to
>avoid these types of bugs by abstracting open/close. Thunks allow you to
>make this abstraction in a way that is more concise and more readable than
>the callback example given above:
How would abstracting open/close help reduce bugs?
I'm really used to using function calls, so anything that does things
differently tends to be less readable to me. But this is my own
preference. What is most readable to people tends to be what they use
most, IMHO.
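As an aside, the acquire/release abstraction being debated here is what Python's later with statement (PEP 343) ended up providing; this is a sketch using contextlib (added in Python 2.5, written here in modern syntax), not part of the original thread:

```python
from contextlib import contextmanager
import os
import tempfile

@contextmanager
def opened(filename):
    # abstracts the open/close pair so callers cannot forget the close
    f = open(filename)
    try:
        yield f
    finally:
        f.close()

# set up a throwaway file to demonstrate with
path = os.path.join(tempfile.mkdtemp(), "file.txt")
with open(path, "w") as out:
    out.write("hello")

with opened(path) as f:
    text = f.read()
print(text)   # -> hello
```

The body of the with block plays the role the thunk proposal gives to the 'do' suite, including the ability to rebind surrounding variables like `text`.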
>do f in with_file('file.txt'):
> print f.read()
>
>Thunks are also more useful than callbacks in many cases since they allow
>variables to be rebound:
>
>t = "no file read yet"
>do f in with_file('file.txt'):
> t = f.read()
>
>Using a callback to do the above example is, in my opinion, more
>difficult:
>
>def with_file(callback, filename):
> f = open(filename)
> t = callback(f)
> f.close()
> return t
>
>def my_read(f):
> return f.read()
>
>t = with_file(my_read, 'file.txt')
Wouldn't your with_file thunk def look pretty much the same as the
callback?
I wouldn't use either of these examples. To me the open/read/close
example you gave as the normal case would work fine, with some basic
error checking of course. Since Python's return statement can handle
multiple values, it's no problem to put everything in a single
function and return both the status with an error code if any, and the
result. I would keep the open, read/write, and close statements in
the same function and not split them up.
>> When I see 'do', it reminds me of 'do lo
Re: pre-PEP: Simple Thunks
On 17 Apr 2005 01:46:14 -0700, "Kay Schluehr" <[EMAIL PROTECTED]> wrote:

>Ron_Adam wrote:
>
>> I sort of wonder if this is one of those things that looks like it
>> could be useful at first, but it turns out that using functions and
>> class's in the proper way, is also the best way. (?)
>
>I think Your block is more low level.

Yes, that's my thinking too. I'm sort of looking to see if there is a basic building block here that can be used in different situations, but in very consistent ways. And questioning the use of it as well.

>It is like copying and pasting
>code-fragments together but in reversed direction: ...

Yes, in a function call you send values to a remote code block and receive back a value. The reverse is to get a remote code block, then use it. In this case the inserted code block's variables become local, so my point is you don't need to use arguments to pass values. But you do need to be very consistent in how you use the code block. (I'm not suggesting we do this, BTW.)

The advantage of argument passing in this case would be that it puts a control on the block: certain arguments must be assigned before it gets executed.

Is it possible to have a tuple argument translation independent of a function call? This would also be a somewhat lower level operation, but might be useful. For example, at a certain point in a program you want to ensure that certain values are set; you could use a tuple argument parser to do so. It could act the same way as a function call argument parser, but could be used in more places, and it would raise an error as expected. Basically it would be the same as:

def argset(x,y,z=1):
    return x,y,z

But done in an inline way:

a,b,c = (x,y,z=1)    # looks familiar doesn't it. ;-)

As an inline expression it could use '%' like the string methods, something like this?

(x,y,z=1)%a,b,c    # point x,y,z to a,b,c (?)

And combined with code chunks like this:

def chunk:    # code chunk, ie.. no arguments.
    # set (x,y)    # informal arguments commented
    return x+y     # return a value for inline use

value = (x,y)%a,b: chunk    # use local code chunk as body

I think this might resemble some of the suggested lambda replacements.

The alternative might be just to have the function's name space imported as an option:

def f(x,y):
    return x+y

z = dolocal f(1,2)    # But why would I want to do this?

I think these pre-PEPs are really about doing more with less typing, and don't really add anything to Python. I also feel that the additional abstraction, when used only to compress code, will just make programs harder to understand.

>You have to change all the environments that use the thunk e.g.
>renaming variables. It is the opposite direction of creating
>abstractions i.e. a method to deabstract functions: introduce less
>modularity and more direct dependencies. This is the reason why those
>macros are harmfull and should be abandoned from high level languages (
>using them in C is reasonable because code expansion can be time
>efficient and it is also a way to deal with parametric polymorphism but
>Python won't benefit from either of this issues ).
>
>Ciao,
>Kay

I agree. If a code block of this type is used, it should be limited to within the function it's defined in. The advantage, if it's used in a bare-bones, low level way with no argument passing, would be some performance benefit over function calls in certain situations. That is, a small reusable code block without the function call overhead. But only use it in the local name space it's defined in. Otherwise use a function or a class.

Cheers,
Ron
Re: pre-PEP: Simple Thunks
On Sun, 17 Apr 2005 15:02:12 -0700, Brian Sabbey
<[EMAIL PROTECTED]> wrote:
Brian Sabbey wrote:
> I'm kicking myself for the first example I gave in my original post in
> this thread because, looking at it again, I see now that it really gives
> the wrong impression about what I want thunks to be in python. The
> 'thunkit' function above shouldn't be in the same namespace as the thunk.
> It is supposed to be a re-usable function, for example, to acquire and
> release a resource. On the other hand, the 'foo' function is supposed to
> be in the namespace of the surrounding code; it's not re-usable. So your
> example above is pretty much the opposite of what I was trying to get
> across.
This would explain why I'm having trouble seeing it then.
> def pickled_file(thunk, name):
> f = open(name, 'r')
> l = pickle.load(f)
> f.close()
> thunk(l)
> f = open(name, 'w')
> pickle.dump(l, f)
> f.close()
>
> Now I can re-use pickled_file whenever I have to modify a pickled file:
>
> do data in pickled_file('pickled.txt'):
> data.append('more data')
> data.append('even more data')
>
> In my opinion, that is easier and faster to write, more readable, and less
> bug-prone than any non-thunk alternative.
>
The above looks like it's missing something to me. How does 'data'
interact with 'thunk(l)'? What parts are in whose local space?
This might be the non-thunk version of the above.
def pickled_file(thunk, name):
    f = open(name, 'r')
    l = pickle.load(f)
    f.close()
    thunk(l)
    f = open(name, 'w')
    pickle.dump(l, f)
    f.close()

def data_append(L):
    L.append('more data')
    L.append('still more data')

pickled_file(data_append, name)
I don't think I would do it this way. I would put the data
list in a class and add a method to it to update the pickle file. Then
call that from any methods that update the data list.
>> def with_file:    # no argument list, local group.
>>     f = open(filename)
>>     t = callback(f)
>>     f.close()
>>
>> def my_read(f):
>>     return f.read()
>>
>> callback = my_read
>> filename = 'filename'
>> do with_file
>
> This wouldn't work since with_file wouldn't be re-usable. It also doesn't
> get rid of the awkwardness of defining a callback.
As long as the name with_file isn't rebound to something else it could
be used as often as needed. I admit there are better ways to do it
though.
Cheers,
Ron
Re: pre-PEP: Simple Thunks
On Sun, 17 Apr 2005 19:56:10 -0700, Brian Sabbey
<[EMAIL PROTECTED]> wrote:
>I also wouldn't do it that way. I don't see a class as being much better,
>though. If I understand you correctly, with classes you would have
>something like:
>
>p = Pickled('pickled.txt')
>p.load()
>p.data.append('more data')
>p.data.append('even more data')
>p.dump()
The load and dump would be private to the data class object. Here's a
more complete example.
import pickle
class PickledData(object):
    def __init__(self, filename):
        self.filename = filename
        self.L = None
        try:
            self._load()
        except IOError:
            self.L = []

    def _load(self):
        f = open(self.filename, 'r')
        self.L = pickle.load(f)
        f.close()

    def _update(self):
        f = open(self.filename, 'w')
        pickle.dump(self.L, f)
        f.close()

    def append(self, record):
        self.L.append(record)
        self._update()

    # add other methods as needed ie.. get, sort, clear, etc...
pdata = PickledData('filename')
pdata.append('more data')
pdata.append('even more data')
print pdata.L
['more data', 'even more data']
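As an aside, the standard library's shelve module already covers much of what this class does, persisting objects through pickle behind a dict-like interface; a sketch (written in modern print syntax, with a throwaway temp path):

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data")

db = shelve.open(path)           # a persistent, pickle-backed mapping
db['records'] = ['more data', 'even more data']
db.close()                       # close() flushes everything to disk

db = shelve.open(path)           # reopen; the data survived
print(db['records'])             # -> ['more data', 'even more data']
db.close()
```

The hand-rolled class still has a place when you want append() to write through immediately, which shelve only does with its writeback option.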
>This has the same issues as with opening and closing files: losing the
>'dump', having to always use try/finally if needed, accidentally
>re-binding 'p', significantly more lines. Moreover, class 'Pickled' won't
>be as readable as the 'pickled_file' function above since 'load' and
>'dump' are separate methods that share data through 'self'.
A few more lines to create the class, but it encapsulates the data
object better. It is also reusable and extendable.
Cheers,
Ron
>The motivation for thunks is similar to the motivation for generators--
>yes, a class could be used instead, but in many cases it's more work than
>should be necessary.
>
>-Brian
Re: pre-PEP: Suite-Based Keywords
On Mon, 18 Apr 2005 12:50:24 +0200, Reinhold Birkenfeld <[EMAIL PROTECTED]> wrote: >y = (f(11, 22, x=1, y='y for f') * > g(*args_from_somewhere, > x='x for g', y='y for g', > foo=lambda: return 'foo for g')) > >would be my current way to express this. But still, the less lines, >the less confusing it is. I would probably do it this way. y = f(11, 22, x=1, y='y for f') \ * g( *args_from_somewhere, x='x for g', y='y for g', foo=lambda: return 'foo for g' ) I tend to put the opperators on the left for continued lines. It's nice visual que to whats happening. if (a==1 or b==2 or c==3): x = ( 1. + the_last_value_i_needed + the_first_value_i_started_with + another_long_name_for_something ) This subject really hasn't been a problem for me. So I really don't see the point of adding a new syntax. And this works on the def side. def f( first, second, x=0, y='' ): # # rest of body # So is this new syntax just a way to keep the '()'s closer together? Cheers, Ron -- http://mail.python.org/mailman/listinfo/python-list
Re: pre-PEP: Suite-Based Keywords
On Mon, 18 Apr 2005 14:06:08 -0700, "Robert Brewer" <[EMAIL PROTECTED]> wrote:

>Reinhold Birkenfeld wrote:
>> >y = (f(11, 22, x=1, y='y for f') *
>> >     g(*args_from_somewhere,
>> >       x='x for g', y='y for g',
>> >       foo=lambda: return 'foo for g'))
>> >
>> >would be my current way to express this. But still, the less lines,
>> >the less confusing it is.
>
>And Ron Adam replied:
>> I would probably do it this way.
>>
>> y = f(11, 22, x=1, y='y for f') \
>>     * g( *args_from_somewhere,
>>          x='x for g',
>>          y='y for g',
>>          foo=lambda: return 'foo for g' )
>
>Which are both prettier, until you actually try to use them:
>
> g( *args_from_somewhere, x='x for g', y='y for g', foo=lambda:
>return 'foo for g' )
>Traceback (  File "", line 1
>    g( *args_from_somewhere, x='x for g', y='y for g', foo=lambda:
>return 'foo for g' )
>                                                                ^
>SyntaxError: invalid syntax

I didn't test that particular part.. but this should work. It wasn't
the fault of the formatting. ;-)

def f(a, b, x=None, y=None):
    return 1

def g(args, x=None, y=None, foo=None):
    return 1

args_from_somewhere = (23, 24)

y = f(11, 22, x=1, y='y for f') \
    * g( args_from_somewhere,
         x='x for g',
         y='y for g',
         foo=lambda foo: 'foo for g' )

print y
1

>Robert Brewer
>MIS
>Amor Ministries
>[EMAIL PROTECTED]
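The underlying rule is that a lambda body must be a single expression; `return` is a statement, hence the SyntaxError above. A sketch of the corrected call in modern Python syntax (the functions just return 1, as in the post):

```python
# A lambda body is an expression; writing "lambda: 'foo for g'" works
# where "lambda: return 'foo for g'" is a syntax error.
def f(a, b, x=None, y=None):
    return 1

def g(args, x=None, y=None, foo=None):
    return 1

args_from_somewhere = (23, 24)

y = (f(11, 22, x=1, y='y for f')
     * g(args_from_somewhere,
         x='x for g',
         y='y for g',
         foo=lambda: 'foo for g'))  # expression body, no "return"

print(y)  # -> 1
```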
Re: pre-PEP: Simple Thunks
On Mon, 18 Apr 2005 21:11:52 -0700, Brian Sabbey
<[EMAIL PROTECTED]> wrote:
>Ron_Adam wrote:
>> The load and dump would be private to the data class object. Here's a
>> more complete example.
>>
>> import pickle
>>
>> class PickledData(object):
>>     def __init__(self, filename):
>>         self.filename = filename
>>         self.L = None
>>         try:
>>             self._load()
>>         except IOError:
>>             self.L = []
>>     def _load(self):
>>         f = open(self.filename, 'r')
>>         self.L = pickle.load(f)
>>         f.close()
>>     def _update(self):
>>         f = open(self.filename, 'w')
>>         pickle.dump(self.L, f)
>>         f.close()
>>     def append(self, record):
>>         self.L.append(record)
>>         self._update()
>>     # add other methods as needed, e.g. get, sort, clear, etc...
>>
>> pdata = PickledData('filename')
>>
>> pdata.append('more data')
>> pdata.append('even more data')
>>
>> print pdata.L
>> ['more data', 'even more data']
>>
>>
>>> This has the same issues as with opening and closing files: losing the
>>> 'dump', having to always use try/finally if needed, accidentally
>>> re-binding 'p', significantly more lines. Moreover, class 'Pickled' won't
>>> be as readable as the 'pickled_file' function above since 'load' and
>>> 'dump' are separate methods that share data through 'self'.
>>
>> A few more lines to create the class, but it encapsulates the data
>> object better. It is also reusable and extendable.
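For reference, the class sketched in the quoted post runs as-is once adapted to modern Python (binary file modes, `with` blocks, and a temporary file path instead of the literal 'filename'):

```python
import os
import pickle
import tempfile

class PickledData(object):
    """List-like store that re-pickles itself after each change."""
    def __init__(self, filename):
        self.filename = filename
        try:
            self._load()
        except IOError:       # no saved state yet; start empty
            self.L = []
    def _load(self):
        with open(self.filename, 'rb') as f:
            self.L = pickle.load(f)
    def _update(self):
        with open(self.filename, 'wb') as f:
            pickle.dump(self.L, f)
    def append(self, record):
        self.L.append(record)
        self._update()

path = os.path.join(tempfile.mkdtemp(), 'data.pkl')

pdata = PickledData(path)
pdata.append('more data')
pdata.append('even more data')
print(pdata.L)  # -> ['more data', 'even more data']

# A new instance reloads the pickled state from disk.
print(PickledData(path).L)  # -> ['more data', 'even more data']
```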
>
>This class isn't reusable in the case that one wants to pickle something
>other than an array. Every type of object that one would wish to pickle
>would require its own class.
...Or in a function, or as the three to six lines of pickle code
someplace. Many programs load their data when they start and then save
it when the user asks them to, so there is no one method that fits all
situations. Your thunk example does handle some things better.
Here's yet another way to do it, but it has some limitations as well.
import pickle

def pickle_it(filename, obj, commands):
    # Load any previous state into obj *in place*, so the commands
    # below and the final dump both see the same object.
    try:
        f = open(filename, 'r')
        obj[:] = pickle.load(f)
        f.close()
    except IOError:
        pass
    for func, arg in commands:
        func(arg)
    f = open(filename, 'w')
    pickle.dump(obj, f)
    f.close()

fname = 'filename'
L = []
opps = [ (L.append, 'more data'),
         (L.append, 'even more data') ]

pickle_it(fname, L, opps)
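As a quick sanity check of the pickle_it idea, here is a self-contained version in modern Python (binary pickle modes and a temporary file; the load is done in place so the queued commands and the dump operate on the same list):

```python
import os
import pickle
import tempfile

def pickle_it(filename, obj, commands):
    # Load previous state into obj in place, apply the queued
    # (callable, argument) commands, then pickle the result back out.
    try:
        with open(filename, 'rb') as f:
            obj[:] = pickle.load(f)
    except IOError:           # first run: no file yet
        pass
    for func, arg in commands:
        func(arg)
    with open(filename, 'wb') as f:
        pickle.dump(obj, f)

path = os.path.join(tempfile.mkdtemp(), 'data.pkl')

L = []
pickle_it(path, L, [(L.append, 'more data'),
                    (L.append, 'even more data')])

# A second run starts from an empty list but picks up the saved state.
M = []
pickle_it(path, M, [(M.append, 'still more')])
print(M)  # -> ['more data', 'even more data', 'still more']
```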
>Also, this implementation behaves differently because the object is
>re-pickled after every modification. This could be a problem when writing
>over a network, or to a shared resource.
>
>-Brian
In some cases writing to the file after every modification would be
desired. A way around that would be to use a buffer of some sort. But
then again, that adds another level of complexity, and you would have
to ensure it's flushed at some point, which gets back to the issue of
not closing a file.
Thanks for explaining how thunks work. I'm still undecided on whether
they should be a built-in feature or not.
I would rather have a way to store a block of code, pass it to a
function, and then execute it at the desired time. That would cover
both the cases where you would use a thunk and replace lambdas as
well. But I understand there's a lot of resistance to that because of
the potential for abuse.
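Plain functions already get much of the way there: a block of code can be wrapped in a `def`, passed to a function as a value, and executed whenever (and however often) the receiver decides. A minimal sketch in modern Python (all names here are illustrative):

```python
def run_later(block, times=1):
    # Receive a "block of code" as an ordinary callable and execute it
    # at the time, and as often as, this function decides.
    results = []
    for _ in range(times):
        results.append(block())
    return results

counter = {'n': 0}

def block():
    # The deferred block: mutate some shared state and report it.
    counter['n'] += 1
    return counter['n']

print(run_later(block, times=3))  # -> [1, 2, 3]
```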
Cheers,
Ron
