sqlite3, qmarks, and NULL values

2009-05-19 Thread Mitchell L Model
Suppose I have a simple query in sqlite3 in a function:

def lookupxy(x, y):
    conn.execute("SELECT * FROM table WHERE COL1 = ? AND COL2 = ?",
                 (x, y))

However, COL2 might be NULL. I can't figure out a value for y that would 
retrieve rows for which COL2 is NULL. It seems to me that I have to perform an 
awkward test to determine whether to execute a query with one question mark or 
two.

def lookupxy(x, y):
    if y is not None:
        conn.execute("SELECT * FROM table WHERE COL1 = ? AND COL2 = ?",
                     (x, y))
    else:
        conn.execute("SELECT * FROM table WHERE COL1 = ? AND COL2 IS NULL",
                     (x,))

The more question marks involved the more complicated this would get, 
especially if question marks in the middle of several would sometimes need to 
be NULL. I hope I'm missing something and that someone can tell me what it is.
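
Here is the kind of generalization I imagine having to write (a sketch; the
names are illustrative, and the column names must come from trusted code,
since they are formatted directly into the SQL):

def lookup(conn, table, **criteria):
    # build the WHERE clause dynamically, using IS NULL for None values
    clauses = []
    params = []
    for col, val in criteria.items():
        if val is None:
            clauses.append('{} IS NULL'.format(col))
        else:
            clauses.append('{} = ?'.format(col))
            params.append(val)
    sql = 'SELECT * FROM {} WHERE {}'.format(table, ' AND '.join(clauses))
    return conn.execute(sql, params)

# lookup(conn, 'mytable', COL1=x, COL2=None)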


One function calling another defined in the same file being exec'd

2010-01-07 Thread Mitchell L Model

[Python 3.1]

I thought I thoroughly understood eval, exec, globals, and locals, but I
encountered something bewildering today. I have some short files I want to
exec. (Users of my application write them, and the application gives them a
command that opens a file dialog box and execs the chosen file. Users are
expected to be able to write simple Python scripts, including function
definitions. Neither security nor errors are relevant for the purposes of
this discussion, though I do deal with them in my actual code.)

Here is a short piece of code to exec a file and report its result. (The
file being exec'd must assign 'result'.)

def dofile(filename):
    ldict = {'result': None}
    with open(filename) as file:
        exec(file.read(), globals(), ldict)
    print('Result for {}: {}'.format(filename, ldict['result']))

First I call dofile() on a file containing the following:


def fn(arg):
    return sum(range(arg))

result = fn(5)


The results are as expected.

Next I call dofile() on a slightly more complex file, in which one function
calls another function defined earlier in the same file.


def fn1(val):
    return sum(range(val))

def fn2(arg):
    return fn1(arg)

result = fn2(5)


This produces a surprise:

NameError: global name 'fn1' is not defined

[1] How is it that fn2 can be called from the top-level of the script but
fn1 cannot be called from fn2?

[2] Is this correct behavior or is there something wrong with Python here?

[3] How should I write a file to be exec'd that defines several functions
that call each other, as in the trivial fn1-fn2 example above?





Re: One function calling another defined in the same file being exec'd

2010-01-07 Thread Mitchell L Model
I forgot to offer one answer for question [3] in what I just posted: I can
define all the secondary functions inside one main one and just call the
main one. That provides a separate local scope within the main function,
with the secondary functions defined inside it when (each time) the main
function is called. Not too bad, but it will freak out my users, and it
doesn't seem as if it should be necessary to resort to this.



Re: One function calling another defined in the same file being exec'd

2010-01-08 Thread Mitchell L Model


On Jan 7, 2010, at 10:45 PM, Steven D'Aprano wrote an extensive answer to my
questions about one function calling another in the same file being exec'd.
His suggestion about printing out locals() and globals() in the various
possible places provided the clues to explain what was going on. I would
like to summarize what I have learned from this, because although I have
known all the relevant pieces for many years I never put them together in a
way that explains the odd behavior I observed.


Statements that bind new names -- assignment, def, and class -- do so  
in the local scope. While exec'ing a file the local scope is  
determined by the arguments passed to exec; in my case, I passed an  
explicit local scope. It was particularly obtuse of me not to notice  
the effects of this because I was intentionally using it so that an  
assignment to 'result' in the exec'd script would enable the exec'ing  
code to retrieve the value of result. However, although the purity of  
Python with respect to the binding actions of def and class statements  
is wonderful and powerful, it is very difficult cognitively to view a  
def on a page and think "aha! that's just like an assignment of a  
newly created function to a name", even though that is precisely the  
documented behavior of def. So mentally I was making an incorrect  
distinction between what was getting bound locally and what was  
getting bound globally in the exec'd script.


Moreover, the normal behavior of imported code, in which any function  
in the module can refer to any other function in the module, seduced  
me into this inappropriate distinction. To my eye I was just defining  
and using function definitions the way they are in modules. There is a  
key difference between module import and exec: as Steven pointed out,  
inside a module locals() is globals(). On further reflection, I will  
add that what appears to be happening is that during import both the  
global and local dictionaries are set to a copy of the globals() from  
the importing scope and that copy becomes the value of the module's  
__dict__ once import has completed successfully. Top-level statements  
bind names in locals(), as always, but because locals() and globals()  
are the same dictionary, they are also binding them in globals(), so  
that every function defined in the module uses the modified copy of  
globals -- the value of the module's __dict__ -- as its globals() when  
it executes. Because exec leaves locals() and globals() distinct,  
functions defined at the top level of a string being exec'd don't see  
other assignments and definitions that are also in the string.


Another misleading detail is that top-level expressions in the exec  
can use other top-level names (assigned, def'd, etc.), which they will  
find in the exec string's local scope, but function bodies do not see  
the string's local scope. The problem I encountered arises because the  
function definitions need to access each other through the global  
scope, not the local scope. In fact, the problem would arise if one of  
the functions tried to call itself recursively, since its own name  
would not be in the global scope. So we have a combination of two  
distinctions: the different ways module import and exec use globals  
and locals and the difference between top-level statements finding  
other top-level names in locals but functions looking for them in  
globals.


Sorry for the long post. These distinctions go deep into the semantics  
of Python namespaces, which though they are lean, pure, and beautiful,  
have some consequences that can be surprising -- more so the more  
familiar you are with other languages that do things differently.


Oh, and as far as using import instead of exec for my scripts, I don't think
that's appropriate, if only because I don't want my application's namespace
polluted by what could be many of these pseudo-modules users might load
during a session. (Yes, I could remove the name once the import is finished,
but importing solely for side-effects rather than to use the imported module
is offensive. Well, I would be using one module name -- result -- but that
doesn't seem to justify the complexities of setting up the import and
accessing the module when exec does in principle just what I need.)


Finally, once all of this is really understood, there is a simple way  
to change an exec string's def's to bind globally instead of locally:  
simply begin the exec with a global declaration for any function  
called by one of the others. In my example, adding a "global fn1" at  
the beginning of the file fixes it so exec works.



global fn1  # enable fn1 to be called from fn2!

def fn1(val):
    return sum(range(val))

def fn2(arg):
    return fn1(arg)

result = fn2(5)




Re: One function calling another defined in the same file being exec'd

2010-01-08 Thread Mitchell L Model


On Jan 8, 2010, at 9:55 AM, "Gabriel Genellina" <[email protected]> wrote:




Ok - short answer or long answer?

Short answer: Emulate how modules work. Make globals() same as  
locals(). (BTW, are you sure you want the file to run with the  
*same* globals as the caller? It sees the dofile() function and  
everything you have defined/imported there...). Simply use:  
exec(..., ldict, ldict)


[1] How is it that fn2 can be called from the top-level of the script but
fn1 cannot be called from fn2?


Long answer: First, add these lines before result=fn2(5):

print("globals=", globals().keys())
print("locals=", locals().keys())
import dis
dis.dis(fn2)

and you'll get:

globals()= dict_keys(['dofile', '__builtins__', '__file__', '__package__',
'__name__', '__doc__'])

locals()= dict_keys(['result', 'fn1', 'fn2'])

So fn1 and fn2 are defined in the *local* namespace (as always  
happens in Python, unless you use the global statement). Now look at  
the code of fn2:


  6           0 LOAD_GLOBAL              0 (fn1)
              3 LOAD_FAST                0 (arg)
              6 CALL_FUNCTION            1
              9 RETURN_VALUE

Again, the compiler knows that fn1 is not local to fn2, so it must  
be global (because there is no other enclosing scope) and emits a  
LOAD_GLOBAL instruction. But when the code is executed, 'fn1' is not  
in the global scope...


Solution: make 'fn1' exist in the global scope. Since assignments  
(implied by the def statement) are always in the local scope, the  
only alternative is to make both scopes (global and local) the very  
same one.


This is very helpful additional information and clarification! Thanks.



This shows that the identity "globals() is locals()" is essential  
for the module system to work.


Yes, though I doubt more than a few Python programmers would guess  
that identity.




[2] Is this correct behavior or is there something wrong with  
Python here?


It's perfectly logical once you get it... :)


I think I'm convinced.



[3] How should I write a file to be exec'd that defines several functions
that call each other, as in the trivial fn1-fn2 example above?


Use the same namespace for both locals and globals:  
exec(file.read(), ldict, ldict)




I was going to say that this wouldn't work because the script couldn't use
any built-in names, but the way exec works, if the value passed for the
globals argument doesn't contain an entry for '__builtins__' it adds one. I
would have a further problem in that there are some names I want users to be
able to use in their scripts, in particular classes that have been imported
into the scope of the code doing the exec, but come to think of it I don't
want to expose the entire globals() anyway. The solution is to use the same
dictionary for both globals and locals, as you suggest, to emulate the
behavior of module import, and explicitly add to it the names I want to make
available (and since they are primarily classes, there are relatively few of
those, as opposed to an API of hundreds of functions). Thanks for the help.
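
A minimal sketch of that plan (Widget stands in for one of the application
classes I want to expose; the names are illustrative):

class Widget:  # stand-in for an application class user scripts may need
    pass

def dofile(filename):
    # one dictionary serves as both globals and locals, as in module import
    env = {'result': None, 'Widget': Widget}
    with open(filename) as file:
        exec(file.read(), env, env)
    print('Result for {}: {}'.format(filename, env['result']))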




Re: Python-list Digest, Vol 76, Issue 97

2010-01-09 Thread Mitchell L Model

On Jan 8, 2010, at 7:35:39 PM EST, Terry Reedy  wrote:


On 1/8/2010 12:02 PM, Mitchell L Model wrote:



On further reflection, I will add that what appears to be happening is that
during import both the global and local dictionaries are set to a copy of
the globals() from the importing scope and that copy becomes the value of
the module's __dict__ once import has completed successfully.


I have no idea why you think that. The module dict starts empty  
except for __name__, __file__, and perhaps a couple of other  
'hidden' items. It is not a copy and has nothing to do with  
importing scopes.


Why I think -- or, rather, thought -- that was because of some  
defective experiments I ran. It was purely a delusion. Thank you for  
correcting it.




> and that copy becomes the value of the module's __dict__ once
> import has completed successfully.

That new dict becomes the module's __dict__.


Because exec leaves locals() and globals() distinct,


Not necessarily.

In 3.x, at least,
exec(s)
executes s in the current scope. If this is top level, where locals  
is globals, then same should be true within exec.


Yes. To simplify some of my ramblings and incorporate the points you and
others have made, and to once again acknowledge Python's elegance, an
important observation, which I bet even a lot of serious Python programmers
don't realize (or at least not consciously), is that:

globals() is locals()

in the following contexts:
    the interpreter top level
    the top level of a module (though, as you point out, it starts out as a
        very bare dictionary during import)
    a string being exec'd when the call to exec includes
        no dictionary argument(s)
        one dictionary argument
        the same dictionary as both the second and third arguments

The identity does not hold for:
    a string being exec'd when a different dictionary is provided as the
        second and third arguments to exec
    inside anything that creates a scope: a function definition, class
        definition, etc.

Did I get all that right? Are there any other contexts that should be
included in these?
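
A quick script that checks several of these contexts (a sketch; run it as a
script so the first line executes at module top level):

print(globals() is locals())                  # True: module top level

exec("print(globals() is locals())")          # True: no dictionary arguments
d = {}
exec("print(globals() is locals())", d)       # True: one dictionary argument
exec("print(globals() is locals())", d, d)    # True: same dict passed twice
exec("print(globals() is locals())", d, {})   # False: distinct dictionaries

def f():
    exec("print(globals() is locals())")      # False: inside a function scope
f()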




d = {}
exec(s, d)

In 3.x, at least, d will also be used as locals.


Yes, talking about 3.x.



exec(s, d, d)

Again, globals and locals are not distinct.

It would seem that in 3.x, the only way for exec to have distinct globals
and locals is to call exec(s) where they are distinct or to pass distinct
globals and locals.


Apparently so. To clarify "where they are distinct", that would mean  
from a context in which they were already distinct, which is not the  
case if exec is called from the top level, but is the case if called  
from within, say, a function, as my code does.





Some of the issues of this thread are discussed in Language Reference 4.1,
Naming and Binding. I suppose it could be clearer than it is, but the
addition of nonlocal scope complicated things.



I pretty much have that section memorized and reread it at least  
monthly. It's part of what I meant by starting my original comments by  
saying that I thought I understood all of this. Thank you (and others)  
for helping clarify exactly what's going on. As with so many things in  
Python, it is not always easy to keep one's preconceptions, delusions,  
and experiences with other languages out of the way of its simplicity,  
even if one is a very experienced and knowledgeable Python programmer.


--- Mitchell


sys.stdout vs. sys.stderr

2010-01-10 Thread Mitchell L Model
In Python 3.1 is there any difference in the buffering behavior of the  
initial sys.stdout and sys.stderr streams? They are both line_buffered  
and stdout doesn't seem to use a larger-grain buffering, so they seem  
to be identical with respect to buffering. Were they different at some  
earlier point in Python's evolution?



Re: I really need webbrowser.open('file://') to open a web browser

2010-01-27 Thread Mitchell L Model


On Jan 15, 2010, at 3:59 PM, Timur Tabi wrote:


After reading several web pages and mailing list threads, I've learned
that the webbrowser module does not really support opening local
files, even if I use a file:// URL designator.  In most cases,
webbrowser.open() will indeed open the default web browser, but with
Python 2.6 on my Fedora 10 system, it opens a text editor instead.  On
Python 2.5, it opens the default web browser.

This is a problem because my Python script creates a local HTML file
and I want it displayed on the web browser.

So is there any way to force webbrowser.open() to always use an actual
web browser?


I had some discussions with the Python documentation writers that led  
to the following note being included in the Python 3.1 library  
documentation for webbrowser.open: "Note that on some platforms,  
trying to open a filename using this function, may work and start the  
operating system’s associated program. However, this is neither  
supported nor portable." The discussions suggested that this lack of  
support and portability was actually always the case and that the  
webbrowser module is simply not meant to handle file URLs. I had taken  
advantage of the accidental functionality to generate HTML reports and  
open them, as well as to open specific documentation pages from within  
a program.


You can control which browser opens the URL by using webbrowser.get to  
obtain a controller for a particular browser, specified by its  
argument, then call the open method on the controller instead of the  
module.
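
For example (a sketch; the browser name is illustrative and
platform-dependent):

import webbrowser

# ask for a specific browser; fall back to the platform default if absent
try:
    controller = webbrowser.get('firefox')  # name is illustrative
except webbrowser.Error:
    controller = webbrowser.get()
controller.open('http://example.com/')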


For opening files reliably, and for the ability to pick a particular program
(browser or otherwise) to open them with, you might have to resort to
invoking a command line via subprocess.Popen.



Re: I really need webbrowser.open('file://') to open a web browser

2010-01-27 Thread Mitchell L Model

On Jan 27, 2010, at 3:31 PM, Timur Tabi wrote:

On Wed, Jan 27, 2010 at 12:29 PM, Mitchell L Model wrote:


I had some discussions with the Python documentation writers that led to the
following note being included in the Python 3.1 library documentation for
webbrowser.open: "Note that on some platforms, trying to open a filename
using this function, may work and start the operating system's associated
program. However, this is neither supported nor portable."


Then they should have renamed the API.  I appreciate that they're
finally documenting this, but I still think it's a bunch of baloney.


I agree, but I am pretty sure, based on the discussions I had with the
Python documenters and developers, that there's no hope of winning this
argument. I suppose that since a file: URL is not, strictly speaking, on the
web, it shouldn't be opened with a "web" browser. It's just that the "web"
part of "web browser" became more or less obsolete a long time ago, since
there are so many more ways of using browsers and so many more things they
can do than just browse the web. So if you interpret the name "webbrowser"
to mean that it browses the web, as opposed to files, which means going
through some kind of server-based protocol, the module does what it says.
But I still like the idea of using it to open files, especially when I want
the file to be opened by its associated application and not a browser.



You can control which browser opens the URL by using webbrowser.get to
obtain a controller for a particular browser, specified by its argument,
then call the open method on the controller instead of the module.


How can I know which controller (application) the system will use when
it opens an http URL?  I depend on webbrowser.open('http') to choose
the best web browser on the installed system.  Does webbrowser.get()
tell me which application that will be?


webbrowser.get() with no arguments gives you the default kind of
browser controller, just as if you had used webbrowser.open()
directly.



For opening files reliably, and for the ability to pick a particular program
(browser or otherwise) to open them with, you might have to resort to
invoking a command line via subprocess.Popen.


But that only works if I know which application to open.


Aha. You could use subprocess to specify the application from within your
Python code, but not to indicate "the user's default browser", unless the
platform has a command for that.

On OS X, for instance, the command line:

    open file.html

opens file.html with the application the user has associated with html
files, whereas

    open -a safari file.html

will open it with Safari even if the user has chosen Firefox for html files.
There's stuff like this for Windows, I suppose, but hardly as convenient.
And I think that Linux environments are all over the place on this, but I'm
not sure.
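
A cross-platform fallback along those lines might look like this (a sketch;
the commands vary by platform and 'report.html' is illustrative):

import subprocess
import sys

def open_with_default_app(path):
    # hand the file to the OS's "open with associated application" command
    if sys.platform == 'darwin':
        subprocess.Popen(['open', path])
    elif sys.platform.startswith('win'):
        subprocess.Popen(['cmd', '/c', 'start', '', path])
    else:
        subprocess.Popen(['xdg-open', path])  # common on Linux desktops

open_with_default_app('report.html')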

webbrowser.get() returns a control object of the default class for the
user's environment -- the one that means "use the default browser" -- so it
won't help.


Re: python 3's adoption

2010-01-28 Thread Mitchell L Model
I have been working with Python 3 for over a year. I used it in writing my
book "Bioinformatics Programming Using Python"
(http://oreilly.com/catalog/9780596154509). I didn't see any point in
teaching an incompatible earlier version of a language in transition. In
preparing the book and its examples I explored a large number of Python
modules in some depth and encountered most of the differences between the
language and libraries of Python 2 and Python 3. The change was a bit
awkward for a while, and there were some surprises, but in the end I have
found nothing in Python 3 for which I would prefer Python 2's version.


Removal of old-style classes is a big win. Having print as a function  
provides a tremendous amount of flexibility. I use the sep and end  
keywords all the time. There is no reason for print to be a statement,  
and it was an awkward inconsistency in a language that leans towards  
functional styles. Likewise the elimination of cmp, while shocking,  
leads to much simpler comparison arguments to sort, since all the  
function does is return a key; then, sort uses __lt__ (I think) so it  
automatically uses each class's definition of that. The weird objects  
returned from things like sorted, dict.keys/values/items, and so on  
are values that in practice are used primarily in iterations; you can  
always turn the result into a list, though I have to admit that while  
developing and debugging I trip trying to pick out a specific element  
from one of these using indexing (typically [0]); I've learned to  
think of them as generators, even though they aren't. The  
rearrangements and name changes in the libraries are quite helpful. I  
could go on, but basically the language and library changes are on the  
whole large improvements with little, if any, downside.


Conversion of old code is greatly facilitated by the 2to3 tool that comes
with Python 3. The big issue in moving from 2 to 3 is the external libraries
and development tools you use. Different IDEs have released versions that
support Python 3 at different times. (I believe Wing was the first.) If you
use numpy, for example, or one of the many libraries that require it, you
are stuck. Possibly some important facilities will never be ported to Python
3, but probably most active projects will eventually produce a Python 3
version -- for example, according to its web page, a Python 3 version of PIL
is on the way. I was able to cover all the topics in my book using only
Python library modules, something I felt would be best for readers -- I used
libraries such as ElementTree, sqlite3, and tkinter. The only disappointment
was that I couldn't include a chapter on BioPython, since there is no Python
3 version.


By now, many large facilities support both Python 2 and Python 3. I am  
currently building a complex GUI/Visualization application based on  
the Python 3 version of PyQt4 and Wing IDE and am delighted with all  
of it. It may well be that some very important large 


Re: python 3's adoption

2010-01-28 Thread Mitchell L Model


On Jan 28, 2010, at 12:00 PM, [email protected] wrote:


From: Roy Smith 
Date: January 28, 2010 11:09:58 AM EST
To: [email protected]
Subject: Re: python 3's adoption


In article ,
Mitchell L Model  wrote:


I use the sep and end keywords all the time.


What are 'sep' and 'end'?  I'm looking in
http://docs.python.org/3.1/genindex-all.html and don't see those mentioned
at all.  Am I just looking in the wrong place?



Sorry -- I wasn't clear. They are keyword arguments to the print  
function.




Re: python 3's adoption

2010-01-28 Thread Mitchell L Model


On Jan 28, 2010, at 1:40 PM, Terry Reedy  wrote

...



On 1/28/2010 11:03 AM, Mitchell L Model wrote:

I have been working with Python 3 for over a year. ...


I agree completely.


Such sweet words to read!



Conversion of old code is greatly facilitated by the 2to3 tool that comes
with Python 3. The big issue in moving from 2 to 3 is the external
libraries and development tools you use. Different IDEs have released
versions that support Python 3 at different times. (I believe Wing was the
first.) If you use numpy, for example, or one of the many libraries that
require it, you are stuck. Possibly some important facilities will never
be ported to Python 3, but probably most active projects will eventually
produce a Python 3 version -- for example, according to its web page, a
Python 3 version of PIL is on the way. I was able to cover all the topics
in my book using only Python library modules, something I felt would be
best for readers -- I used libraries such as ElementTree, sqlite3, and
tkinter. The only disappointment was that I couldn't include a chapter on
BioPython, since there is no Python 3 version.

By now, many large facilities support both Python 2 and Python 3. I am
currently building a complex GUI/Visualization application based on the
Python 3 version of PyQt4 and Wing IDE and am delighted with all of it.

It may well be that some very important large


Something got clipped ;-)


Thanks for noticing. Actually, I had abandoned that sentence and went back
and added more to the prior paragraph. Just never went back and deleted the
false start.




Anyway, thank you for the report.



Glad to contribute; gladder to be appreciated.


lists as an efficient implementation of large two-dimensional arrays(!)

2010-02-02 Thread Mitchell L Model
An instructive lesson in YAGNI ("you aren't going to need it"),  
premature optimization, and not making assumptions about Python data  
structure implementations.


I need a 1000 x 1000 two-dimensional array of objects. (Since they are
instances of application classes, it appears that the array module is
useless; likewise, since I am using Python 3.1, I can't use numpy or its
relatives, among other things.) The usage pattern is that the array is first
completely filled with objects. Later, objects are sometimes accessed
individually by row and column, and often the entire array is iterated over.


Worried (unnecessarily, as it turns out) by the prospect of a
1,000,000-element list, I started by constructing a dictionary with the keys
0 through 999, each of which had as its value another dictionary with the
keys 0 through 999. The actual values were stored as the values of the
second-level dictionaries.


Using numbers to fill the array, to minimize the effect of creating my more
complex objects, and running Python 3.1.1 on an 8-core Mac Pro with 8GB of
memory, I tried the following.


# create and fill the array:
import time

t1 = time.time()
d2 = {}
for j in range(1000):
    d2[j] = dict()
    for k in range(1000):
        d2[j][k] = k
print(round(time.time() - t1, 2))

0.41

# access each element of the array:
t1 = time.time()
for j in range(1000):
    for k in range(1000):
        elt = d2[j][k]
print(round(time.time() - t1, 2))

0.55

My program was too slow, so I started investigating whether I could improve
on the two-level dictionary, which got used a lot. To get another baseline I
tried a pure 1,000,000-element list, expecting the times to be horrendous,
but look!


# fill a list using append
t1 = time.time()
lst = []
for n in range(1000000):
    lst.append(n)
print(round(time.time() - t1, 2))

0.26

# access every element of a list
t1 = time.time()
for n in range(1000000):
    elt = lst[n]
print(round(time.time() - t1, 2))

What a shock! I could save half the execution time and all my clever  
work and awkward double-layer dictionary expressions by just using a  
list!


Even better, look what happens using a comprehension to create the  
list instead of a loop with list.append:


t1 = time.time()
lst = [n for n in range(1000000)]
print(round(time.time() - t1, 2))

0.11

Creating the list this way takes half the time again.

Iterating over the whole list is easier and faster than iterating over  
the double-level dictionary, in particular because it doesn't involve  
a two-level loop. But what about individual access given a row and a  
column?


t1 = time.time()
for j in range(1000):
    for k in range(1000):
        elt = lst[j * 1000 + k]
print(round(time.time() - t1, 2))

0.45

This is the same as for the dictionary.

I tried a two-level list and a few other things but still haven't found
anything that works better than a single long list -- just like 2-D arrays
are coded in old-style languages, with indices computed as offsets from the
beginning of the linear sequence of all the values. What's amazing is that
creating and accessing a 1,000,000-element list in Python is so efficient.
The usual moral obtains: start simple, analyze problems (functional or
performance) as they arise, decide whether they are worth the cost of
change, then change in very limited ways. And of course abstract and
modularize so that, in my case, for example, none of the program's code
would be affected by the change from a two-level dictionary representation
to using a single long list.
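
A minimal sketch of that kind of abstraction (the class and names are
illustrative, not from my actual program):

class Grid2D:
    """A dense two-dimensional array backed by a single flat list."""

    def __init__(self, nrows, ncols, fill=None):
        self.nrows = nrows
        self.ncols = ncols
        self._cells = [fill] * (nrows * ncols)

    def __getitem__(self, rowcol):
        row, col = rowcol
        return self._cells[row * self.ncols + col]

    def __setitem__(self, rowcol, value):
        row, col = rowcol
        self._cells[row * self.ncols + col] = value

    def __iter__(self):
        # iterating over the whole array needs no two-level loop
        return iter(self._cells)

grid = Grid2D(1000, 1000)
grid[3, 7] = 'x'
assert grid[3, 7] == 'x'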


I realize there are many directions an analysis like this can follow,  
and many factors affecting it, including patterns of use. I just  
wanted to demonstrate the basics for a situation that I just  
encountered. In particular, if the array was sparse, rather than  
completely full, the two-level dictionary implementation would be the  
natural representation.



CGI, POST, and file uploads

2010-03-02 Thread Mitchell L Model
Can someone tell me how to upload the contents of a (relatively small)  
file using an HTML form and CGI in Python 3.1? As far as I can tell  
from a half-day of experimenting, browsing, and searching the Python  
issue tracker, this is broken.  Very simple example:


<html>
  <body>
    <form action="http://localhost:9000/cgi/cgi-test.py"
          enctype="multipart/form-data"
          method="post">
      File: <input type="file" name="contents">
      <input type="submit" value="Submit">
    </form>
  </body>
</html>

cgi-test.py:


#!/usr/local/bin/python3
import cgi
import sys
form = cgi.FieldStorage()
print(form.getfirst('contents'), file=sys.stderr)
print('done')


I run a CGI server with:

#!/usr/bin/env python3
from http.server import HTTPServer, CGIHTTPRequestHandler
HTTPServer(('', 9000), CGIHTTPRequestHandler).serve_forever()



What happens is that the upload never stops. It works in 2.6.

If I cancel the upload from the browser, I get the following output, so I
know that basically things are working; the cgi script just never finishes
reading the POST input:

localhost - - [02/Mar/2010 16:37:36] "POST /cgi/cgi-test.py HTTP/1.1" 200 -

<<>>

Exception happened during processing of request from ('127.0.0.1', 55779)

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/socketserver.py", line 281, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/socketserver.py", line 307, in process_request
    self.finish_request(request, client_address)
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/socketserver.py", line 320, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/socketserver.py", line 614, in __init__
    self.handle()
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/http/server.py", line 352, in handle
    self.handle_one_request()
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/http/server.py", line 346, in handle_one_request
    method()
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/http/server.py", line 868, in do_POST
    self.run_cgi()
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/http/server.py", line 1045, in run_cgi
    if not self.rfile.read(1):
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/socket.py", line 214, in readinto
    return self._sock.recv_into(b)
socket.error: [Errno 54] Connection reset by peer





CGI, POST, and file uploads

2010-03-03 Thread Mitchell L Model


On Mar 2, 2010, at 4:48 PM, I wrote:

Can someone tell me how to upload the contents of a (relatively  
small) file using an HTML form and CGI in Python 3.1? As far as I  
can tell from a half-day of experimenting, browsing, and searching  
the Python issue tracker, this is broken.


followed by a detailed example demonstrating the problem.

Having heard no response, let me clarify that this request was  
preliminary to filing a bug report -- I wanted to make sure I wasn't  
missing something here. If nothing else, this failure should be  
documented rather than the 3.1 library documentation continuing to  
describe how to upload file contents with POST. If someone thinks  
there is a way to make this work in 3.1, or that it isn't a bug  
because CGI is hopeless (i.e., non-WSGI-compliant), or that the  
documentation shouldn't be changed, please respond. I'd rather have  
this particular discussion here than in the bug tracking system.


Meanwhile, let me heartily recommend the Bottle Web Framework
(http://bottle.paws.de) for its simplicity, flexibility, and power. Very
cool stuff. To make it work in Python 3.1, do the following:

1. run 2to3 on bottle.py (the only file there is to download)
2. copy or move the resulting bottle.py to the site-packages directory in
   your Python installation's library directory
3. don't use request.GET.getone or request.POST.getone -- instead of getone,
   use get (the protocol changed to that of the Mapping ABC from the
   collections module)
4. the contents of a file will be returned inside a cgi.FieldStorage object,
   so you need to add '.value' after the call to get in that case




Re: sys.stdout vs. sys.stderr

2010-03-09 Thread Mitchell L Model


On Jan 11, 2010, at 1:47 PM Nobody  wrote:




On Mon, 11 Jan 2010 10:09:36 +0100, Martin v. Loewis wrote:

In Python 3.1 is there any difference in the buffering behavior of the
initial sys.stdout and sys.stderr streams?


No.


Were they different at some earlier point in Python's evolution?


That depends on the operating system. These used to be whatever the
C library set up as stdout and stderr. Typically, they were buffered
in the same way.


On Unix, stdout will be line buffered if it is associated with a tty and
fully buffered otherwise, while stderr is always unbuffered.

On Windows, stdout and stderr are unbuffered if they refer to a character
device, fully buffered otherwise (Windows doesn't have line buffering;
setvbuf(_IOLBF) is equivalent to setvbuf(_IOFBF)).

ANSI C says:

    As initially opened, the standard error stream is not fully buffered;
    the standard input and standard output streams are fully buffered if
    and only if the stream can be determined not to refer to an
    interactive device.




I don't want to get into a quibble fight here, but I need to reraise this
issue. [I teach and write and want to make sure I get this right. I already
have an incorrect paragraph about this in my Bioinformatics Programming
Using Python book.] The key question here is line buffering vs. full
buffering. In Unix (at least in an OS X Terminal), the following code prints
a number every two seconds in Python 2:

>>> for n in range(5):
...     print >> sys.stderr, n,   # final comma to suppress the newline
...     time.sleep(2)

However, in Python 3, similar code does not print the numbers until the
whole thing finishes (again, running from the terminal).

>>> for n in range(5):
...     print(n, file=sys.stderr, end='')
...     time.sleep(2)

So it appears that in a Unix terminal window, Python 2 does not line-buffer
stderr whereas Python 3 does. That's what tripped me up. While developing
and debugging code, I often print periods on a line as some loop progresses
(sometimes every Nth time around, for some reasonable N) just to know the
pace of execution and that the program is still doing something. In doing
that recently in Python 3 I discovered that I either had to leave out the
end='' or do sys.stderr.flush() after every print, which amounts to the same
thing.
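
For the record, a sketch of the progress-dot idiom with the explicit flush
(the loop bounds and sleep interval are illustrative):

import sys
import time

for n in range(50):
    print('.', file=sys.stderr, end='')
    sys.stderr.flush()  # force the dot out despite line buffering
    time.sleep(0.1)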

This was a big surprise, after many, many years of C, C++, Java, and Python
programming -- I have always thought of stderr as completely unbuffered in
languages that have it. That's not to say that no language line-buffers
stderr on some platform; I'm just pointing out an assumption I've lived with
for a very long time that tripped me up when I wrote a note about using
stderr in Python 3 without actually demonstrating the code, and therefore
didn't catch my error.


invoking a method from two superclasses

2009-06-30 Thread Mitchell L Model
In Python 3, how should super() be used to invoke a method, in particular
__init__, that is defined in class C and overrides the same method in its
two superclasses A and B?

class A:
    def __init__(self):
        print('A')

class B:
    def __init__(self):
        print('B')

class C(A, B):
    def __init__(self):
        super().__init__()
        print('C')

C()

Output is:
A
C

I've discovered the surprising fact described in the documentation of super
that specifying a class as the first argument of super means to skip that
class when scanning the mro, so that if C.__init__ includes the line

    super(A, self).__init__()

what gets called is B.__init__. So if I want to call __init__ of both
classes, the definition of C should have both of the following lines:

    super().__init__()
    super(A, self).__init__()

and

    super(B, self).__init__()

does nothing, because B is the last class in the mro.
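
Concretely, using the classes A and B above (a sketch):

class C(A, B):
    def __init__(self):
        super().__init__()          # next after C in the mro: A.__init__
        super(A, self).__init__()   # skip past A in the mro: B.__init__
        print('C')

C()   # now prints A, B, C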

This seems weird. Would someone please give a clear example and explanation of
the recommended way of initializing both superclasses in a simple multiple 
inheritance
situation?

Note: I am EXTREMELY knowledgeable about OO, Python, and many OOLs.
I don't mean to be arrogant, I just want to focus the discussion not open it to 
a broad
interchange about multiple inheritance, the ways it can be used or avoided, 
etc. I just
want to know how to use super. The documentation states the following:

"There are two typical use cases for super. In a class hierarchy with single 
inheritance,
super can be used to refer to parent classes without naming them explicitly, 
thus
making the code more maintainable."

"The second use case is to support cooperative multiple inheritance in a 
dynamic 
execution environment. This use case is unique to Python and is not found in 
statically compiled languages or languages that only support single 
inheritance. 
This makes it possible to implement "diamond diagrams" where multiple base 
classes implement the same method."

"For both use cases, a typical superclass call looks like this:

class C(B):
def method(self, arg):
super().method(arg)# This does the same thing as:
   # super(C, self).method(arg)
"

Though it claims to be demonstrating both cases, it is only demonstrating
single inheritance and a particular kind of multiple inheritance where the
method is found in only one class in the mro. This avoids situations where
you want to call the method anywhere it is found in the mro, or at least in
the direct superclasses. Perhaps __init__ is a special case, but I don't see
how to figure out from the documentation how to initialize two superclasses
of a class. I often file "bug reports" about documentation ambiguities,
vagueness, incompletenesses, etc., but I don't want to do so for this case
until I've heard something definitive about how it should be handled.

Thanks in advance.


Re: invoking a method from two superclasses

2009-06-30 Thread Mitchell L Model
Allow me to add to my previous question that certainly the superclass
methods can be called explicitly without resorting to super(), e.g.:

class C(A, B):
    def __init__(self):
        A.__init__(self)
        B.__init__(self)

My question is really whether there is any way of getting around the
explicit class names by using super() and if not, shouldn't the documentation
of super point out that if more than one class on the mro defines a method
only the first will get called?  What's strange is that it specifically mentions
diamond patterns, which is an important case to get right, but it doesn't show
how.

I suspect we should have a Multiple Inheritance HOWTO, though details and
recommendations would be controversial. I've accumulated lots of abstract
examples along the lines of my question, using multiple inheritance both to
create combination classes (the kinds that are probably best done with
composition instead of inheritance) and mixins. I like mixins, and I like 
abstract
classes. And yes I understand the horrors of working with a large component
library that uses mixins heavily, because I've experienced many of them, going
all the way back to Lisp-Machine Lisp's window system with very many combo
classes such as FancyFontScrollingTitledMinimizableWindow, or whatever.
Also, I understand that properties might be better instead of multiple 
inheritance 
for some situations. What I'm trying to do is puzzle out what the reasonable 
uses
of multiple inheritance are in Python 3 and how classes and methods that follow
them should be written.


Re: invoking a method from two superclasses

2009-06-30 Thread Mitchell L Model
>From: Scott David Daniels 
>Date: Tue, 30 Jun 2009 16:49:18 -0700
>Message-ID: 
>Subject: Re: invoking a method from two superclasses
>
>Mitchell L Model wrote:
>> In Python 3, how should super() be used to invoke a method defined in C
>> that overrides its two superclasses A and B, in particular __init__?
>> ...
>> I've discovered the surprising fact described in the documentation of super
>> <http://docs.python.org/3.1/library/functions.html#super>
>> that specifying a class as the first argument of super means to skip that
>> class when scanning the mro so that ...
>>
>> This seems weird. Would someone please give a clear example and explanation
>> of the recommended way of initializing both superclasses in a simple
>> multiple inheritance situation?
>
>OK, in diamond inheritance in Python (and all multi-inheritance is
>diamond-shaped in Python), the common ancestor must have a method
>in order to properly use super.  The mro is guaranteed to have the
>top of the split (C below) before its children in the mro, and the
>join point (object or root below) after all of the classes from
>which it inherits.
>
>So, the correct way to do what you want:
>
>class A:
>    def __init__(self):
>        super().__init__()
>        print('A')
>
>class B:
>    def __init__(self):
>        super().__init__()
>        print('B')
>
>class C(A, B):
>    def __init__(self):
>        super().__init__()
>        print('C')
>
>C()
>
>And, if you are doing it with a message not available in object:
>
>class root:
>    def prints(self):
>        print('root')  # or pass if you prefer
>
>class A(root):
>    def prints(self):
>        super().prints()
>        print('A')
>
>class B(root):
>    def prints(self):
>        super().prints()
>        print('B')
>
>class C(A, B):
>    def prints(self):
>        super().prints()
>        print('C')
>
>C().prints()
>
>--Scott David Daniels
>[email protected]
>

Great explanation, and 1/2 a "duh" to me. Thanks.
What I was missing is that each path up to and including the top of the
diamond must include a definition of the method, along with super() calls to
keep the method call moving on its way up. Is this what the documentation
means by "cooperative multiple inheritance"?

In your correction of my example, if you remove super().__init__ from
B.__init__ the results aren't affected, because object.__init__ doesn't do
anything and B comes after A in C's mro. However, if you remove
super().__init__ from A.__init__, it stops the "supering" process dead in
its tracks.
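
To illustrate (a sketch: the same shape with the one super() call removed
from A):

class A:
    def __init__(self):
        # no super().__init__() here, so the chain stops at A
        print('A')

class B:
    def __init__(self):
        super().__init__()
        print('B')

class C(A, B):
    def __init__(self):
        super().__init__()
        print('C')

C()   # prints only A then C; B.__init__ is never reached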

It would appear that "super()" really means something like CLOS's
call-next-method. I've seen various discussions in people's blogs to the
effect that super() doesn't really mean superclass, and I'm beginning to
develop sympathy with that view. I realize that implementationally super is
a complicated proxy; understanding the practical implications isn't so easy.
While I've seen all sorts of arguments and discussions, including the
relevant PEP(s), I don't think I've ever seen anyone lay out an example such
as we are discussing, with the recommendation that basically, if you are
using super() in multiple inheritance situations, make sure that the methods
of all the classes in the mro up to at least the top of a diamond all call
super() so it can continue to move the method calls along the mro. The
documentation of super(), for instance, recommends that all the methods in
the diamond should have the same signature, and it says that super() can be
used to implement the diamond, but it never actually comes out and says that
each method below the top must call super(), at the risk of the chain of
calls being broken. I do wonder whether this should go in the doc of super,
the tutorial, or a HOWTO -- it just seems too important and subtle to leave
for people to discover.

Again, many thanks for the quick and clear response.


Re: invoking a method from two superclasses

2009-07-01 Thread Mitchell L Model
[Continuing the discussion about super() and __init__]

The documentation of super points out that good design of diamond patterns
requires the methods to have the same signature throughout the diamond.
That's fine for non-mixin classes, where the diamond captures different ways
of handling the same data. The classical example is

                 BufferedStream
                   /        \
                  /          \
        BufInputStrm      BufOutputStrm    (both have buffers, but use
                  \          /              them differently)
                   \        /
               RandomAccessStream          (or something like that)

The idea of the diamond is to have just one buffer, rather than the two buffers 
that would result in C++ without making the base classes virtual.  All four 
classes could define __init__ with the argument
filename, or whatever, and everything works fine.

The problems start with the use of mixins. In essence, mixins intentionally
do NOT want to be part of diamond patterns. They are orthogonal to the
"true" or "main" class hierarchy and just poke their heads in here and there
in that hierarchy. Moreover, a class could inherit from multiple mixins.
Typical simple orthogonal mixins would be NamedObject, TrackedObject,
LoggedObject, ColoredWidget, and other such names compounded from an
adjective, participle, or gerund and a completely meaningless name such as
Object or Thing. Such classes typically manage one action or piece of state,
to factor it out from the many other classes that need it, where the pattern
of which classes need them does not follow the regular class hierarchy.
Suppose I have a class User that includes NamedObject, TrackedObject, and
LoggedObject as base classes. (By Tracked I mean instances are automatically
registered in a list or dictionary for use by class methods that search,
count, or do other operations on them.)

The problem is that each mixin's __init__ is likely to have a different 
signature. User.__init__
would have arguments (self, name, log), but it would need to call each mixin 
class's __init__ with different arguments. Mixins are different than what the 
document refers to as cooperative multiple inheritance -- does that make them 
uncooperative multiple inheritance classes :-)? I think I'd be back to having 
to call each mixin class's __init__ explicitly:

class User(NamedObject, TrackedObject, LoggedObject):
    def __init__(self, name, log):
        NamedObject.__init__(self, name)
        TrackedObject.__init__(self)
        LoggedObject.__init__(self, log)

This is not terrible. It seems somehow appropriate that because mixin use is 
"orthogonal" to the "real" inheritance hierarchy, they shouldn't have the right 
to use super() and the full mro (after all, who knows where the mro will take 
these calls anyway). And in many cases, two mixins will join into another (a 
NamedTrackedObject) for convenience, and then classes inheriting from that have 
one less init to worry about.
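
For contrast, here is a sketch of what the fully cooperative style would
look like, with each mixin peeling off its own keyword arguments and passing
the rest along (all names are illustrative). It works, but it imposes the
**kwargs convention on every class in the hierarchy:

class NamedObject:
    def __init__(self, name=None, **kwargs):
        super().__init__(**kwargs)
        self.name = name

class TrackedObject:
    instances = []  # all instances, for class methods that search or count

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        TrackedObject.instances.append(self)

class LoggedObject:
    def __init__(self, log=None, **kwargs):
        super().__init__(**kwargs)
        self.log = log

class User(NamedObject, TrackedObject, LoggedObject):
    def __init__(self, name, log):
        super().__init__(name=name, log=log)

u = User('alice', 'user.log')   # each mixin's __init__ runs exactly once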

I suspect this problem is largely with __init__. All the other __special__ fns 
have defined argument lists and so can be used with diamond patterns, mixins, 
etc.

Does this all sound accurate?  (Again, not looking for design advice, just 
trying to ferret out the subtleties and implications of multiple inheritance 
and super().)