Re: Swapping superclass from a module

2009-05-17 Thread Michele Simionato
Try this:

class Base(object):
    pass

class C(Base):
    pass

class NewBase(object):
    pass

C.__bases__ = (NewBase,)

help(C)
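
A quick check that the swap took effect (a sketch, using the names above):

assert issubclass(C, NewBase)      # True after the reassignment
assert not issubclass(C, Base)     # the old base is gone from the MRO
print C.__mro__                    # (C, NewBase, object)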
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: using urlretrive/urlopen

2009-05-17 Thread rustom
On May 16, 6:30 am, "Gabriel Genellina" wrote:
> En Fri, 15 May 2009 12:03:09 -0300, Rustom Mody   
> escribió:
>
> > I am trying to talk to a server that runs on localhost
> > The server runs on http://localhost:7000/ and that opens alright in a
> > web browser.
>
> > However if I use urlopen or urlretrieve what I get is this 'file' --
> > obviously not the one that the browser gets:
>
> > Query 'http://localhost:7000/' not implemented
> > Any tips/clues?
>
> Please post the code you're using to access the server.
> Do you have any proxy set up?
>
> --
> Gabriel Genellina

Thanks Gabriel!

urlopen("http://localhost:7000";, proxies={})
seems to be what I needed.
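
For reference, a minimal sketch of the full call with the response read and
closed (urllib's urlopen accepts a proxies argument in Python 2; the URL is
the one from the thread):

from urllib import urlopen

# proxies={} bypasses any proxy settings picked up from the environment,
# which is what made the localhost request return the wrong document here
f = urlopen("http://localhost:7000", proxies={})
try:
    data = f.read()
finally:
    f.close()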
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Swapping superclass from a module

2009-05-17 Thread Peter Otten
Terry Reedy wrote:

> Steven D'Aprano wrote:
>> On Sat, 16 May 2009 09:55:39 -0700, Emanuele D'Arrigo wrote:
>> 
>>> Hi everybody,
>>>
>>> let's assume I have a module with loads of classes inheriting from one
>>> class, from the same module, i.e.:
>> [...]
>>> Now, let's also assume that myFile.py cannot be changed or it's
>>> impractical to do so. Is there a way to replace the SuperClass at
>>> runtime, so that when I instantiate one of the subclasses NewSuperClass
>>> is used instead of the original SuperClass provided by the first module
>>> module?
>> 
>> That's called "monkey patching" or "duck punching".
>> 
>> http://en.wikipedia.org/wiki/Monkey_patch
>> 
>> http://wiki.zope.org/zope2/MonkeyPatch
>> 
>> http://everything2.com/title/monkey%2520patch
> 
> If the names of superclasses are resolved when classes are instantiated,
> the patching is easy.  If, as I would suspect, the names are resolved
> when the classes are created, before the module becomes available to the
> importing code, then much more careful and extensive patching would be
> required, if it is even possible.  (Objects in tuples cannot be
> replaced, and some attributes are not writable.)

It may be sufficient to patch the subclasses:

$ cat my_file.py
class Super(object):
    def __str__(self):
        return "old"

class Sub(Super):
    def __str__(self):
        return "Sub(%s)" % super(Sub, self).__str__()

class Other(object):
    pass

class SubSub(Sub, Other):
    def __str__(self):
        return "SubSub(%s)" % super(SubSub, self).__str__()

if __name__ == "__main__":
    print Sub()

$ cat main2.py
import my_file
OldSuper = my_file.Super

class NewSuper(OldSuper):
    def __str__(self):
        return "new" + super(NewSuper, self).__str__()

my_file.Super = NewSuper
for n, v in vars(my_file).iteritems():
    if v is not NewSuper:
        try:
            bases = v.__bases__
        except AttributeError:
            pass
        else:
            if OldSuper in bases:
                print "patching", n
                v.__bases__ = tuple(NewSuper if b is OldSuper else b
                                    for b in bases)

print my_file.Sub()
print my_file.SubSub()
print my_file.SubSub()
$ python main2.py
patching Sub
Sub(newold)
SubSub(Sub(newold))

Peter

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to get Exif data from a jpeg file

2009-05-17 Thread Arnaud Delobelle
Daniel Fetchinson  writes:

>> I need to get the creation date from a jpeg file in Python.  Googling
>> brought up a several references to apparently defunct modules.  The best
>> way I have been able to find so far is something like this:
>>
>> from PIL import Image
>> img = Image.open('img.jpg')
>> exif_data = img._getexif()
>> creation_date = exif_data[36867]
>>
>> Where 36867 is the exif tag for the creation date data (which I found by
>> looking at PIL.ExifTags.TAGS).  But this doesn't even seem to be
>> documented in the PIL docs.  Is there a more natural way to do this?
>
>
> Have you tried http://sourceforge.net/projects/exif-py/ ?
>
> HTH,
> Daniel

I will have a look - thank you.
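
A minimal sketch of reading that tag with exif-py (assuming its
EXIF.process_file interface; 'EXIF DateTimeOriginal' is the tag name that
corresponds to id 36867):

import EXIF  # exif-py, assumed importable as EXIF

f = open('img.jpg', 'rb')
try:
    tags = EXIF.process_file(f)      # returns a dict of tag name -> value
finally:
    f.close()
creation_date = tags.get('EXIF DateTimeOriginal')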

-- 
Arnaud
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: KeyboardInterrupt catch does not shut down the socketserver

2009-05-17 Thread Igor Katson

Gabriel Genellina wrote:
En Sat, 16 May 2009 04:04:03 -0300, Igor Katson  
escribió:

Gabriel Genellina wrote:

En Fri, 15 May 2009 09:04:05 -0300, Igor Katson escribió:

Lawrence D'Oliveiro wrote:
In message , 
Igor Katson wrote:

Lawrence D'Oliveiro wrote:
In message , 
Igor Katson wrote:



I have problems in getting a SocketServer to shutdown.

Shutdown implies closing the listening socket, doesn't it?


No (perhaps it should, but that is another issue). There is a
documentation bug; BaseServer.shutdown is documented as "Tells the
serve_forever() loop to stop and waits until it does." [1]
The docstring is much more explicit: """Stops the serve_forever loop.
Blocks until the loop has finished. This must be called while
serve_forever() is running in another thread, or it will deadlock."""

So, if you have a single-threaded server, *don't* use shutdown(). 
And, to orderly close the listening socket, use server_close() 
instead. Your


Hmm. Gabriel, could you please show the same for the threaded 
version? This one deadlocks:

[code removed]


The shutdown method should *only* be called while serve_forever is 
running. If called after server_forever exited, shutdown() blocks 
forever.


[code removed]
But, what are you after, exactly? I think I'd use the above code only 
in a GUI application with a background server.

There are other alternatives, like asyncore or Twisted.
For now, I am just using server.server_close() and it works. The server 
itself is an external transaction manager for PostgreSQL, when a client 
connects to it, serialized data interchange between the server and the 
client starts, e.g. first the client sends data, then the server sends 
data, then again the client, then the server and so on.
I haven't used asyncore or Twisted yet, and didn't know about their 
possible usage while writing the project. I'll research in that direction.


--
http://mail.python.org/mailman/listinfo/python-list


Re: Concurrency Email List

2009-05-17 Thread David M. Besonen
On 5/16/2009 5:26 PM, Aahz wrote:

> On Sat, May 16, 2009, Pete wrote:
>
>> [email protected] is a new email list
>> for discussion of concurrency issues in python.
>
> Is there some reason you chose not to create a list on
> python.org?  I'm not joining the list because Google
> requires that you create a login.

i too would join if it was hosted at python.org, and will not
if it's hosted at google for the same reason.


  -- david

-- 
http://mail.python.org/mailman/listinfo/python-list



Re: Concurrency Email List

2009-05-17 Thread James Matthews
I second this. Google groups are annoying! Just request that it be added to
python.org

James

On Sun, May 17, 2009 at 12:22 PM, David M. Besonen  wrote:

> On 5/16/2009 5:26 PM, Aahz wrote:
>
> > On Sat, May 16, 2009, Pete wrote:
> >
> >> [email protected] is a new email list
> >> for discussion of concurrency issues in python.
> >
> > Is there some reason you chose not to create a list on
> > python.org?  I'm not joining the list because Google
> > requires that you create a login.
>
> i too would join if it was hosted at python.org, and will not
> if it's hosted at google for the same reason.
>
>
>  -- david
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>



-- 

http://www.goldwatches.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Your Favorite Python Book

2009-05-17 Thread James Matthews
For me it's any book on Django, Core Python 2nd Edition (which I will buy if
updated) and Python Power.



On Fri, May 15, 2009 at 7:05 PM, Lou Pecora wrote:

> In article
> ,
>  Mike Driscoll  wrote:
>
> > On May 11, 4:45 pm, Chris Rebert  wrote:
>
> > >
> > > I like "Python in a Nutshell" as a reference book, although it's now
> > > slightly outdated given Python 3.0's release (the book is circa 2.5).
> > >
> > > Cheers,
> > > Chris
>
> "Python in a Nutshell" -- Absolutely!  Covers a lot in an easily
> accessible way.  The first book I reach for.  I hope Martelli updates it
> to 3.0.
>
> --
> -- Lou Pecora
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>


-- 
http://www.goldwatches.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generic web parser

2009-05-17 Thread James Matthews
I don't see any problem with using urllib and SQLite for everything you
mention here.
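
A minimal sketch of that combination (urllib and sqlite3 from the stdlib,
plus BeautifulSoup 3, which the OP already plans to use; the URL and table
schema are assumptions):

import urllib
import sqlite3
from BeautifulSoup import BeautifulSoup

html = urllib.urlopen('http://example.com/').read()
soup = BeautifulSoup(html)
title = soup.find('title')            # whatever detail you actually want

conn = sqlite3.connect('scraped.db')
conn.execute('CREATE TABLE IF NOT EXISTS pages (url TEXT, title TEXT)')
conn.execute('INSERT INTO pages VALUES (?, ?)',
             ('http://example.com/', title.string if title else None))
conn.commit()
conn.close()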

On Sat, May 16, 2009 at 4:18 PM, S.Selvam  wrote:

> Hi all,
>
> I have to design a web parser which will visit the given list of websites
> and fetch a particular set of details.
> It has to be so generic that even if we add new websites, it must fetch
> those details if available anywhere.
> So it must be something like a framework.
>
> Though I have done some parsers, they parse for a given format (e.g.
> getting the data from the  tag). But here each website may have a
> different format and the information may be available within any tags.
>
> I know it's a tough task for me, but I feel with Python it should be
> possible.
> My request is: if such a thing is already available please let me know;
> your suggestions are also welcome.
>
> Note: I planned to use BeautifulSoup for parsing.
>
> --
> Yours,
> S.Selvam
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>


-- 
http://www.goldwatches.com
-- 
http://mail.python.org/mailman/listinfo/python-list


os.path.split gets confused with combined \\ and /

2009-05-17 Thread Stef Mientki

hello,

just wonder how others solve this problem:
I've to distribute both python files and data files.
Everything is developed under Windows, and now the data files contain 
paths with mixed \\ and /.

Under Windows everything is working well,
but under Ubuntu / Fedora strange errors sometimes occur.
Now I was thinking that using os.path.split would solve all problems,
but if I've the following relative path

   path1/path2\\filename.dat

split will deliver the following under windows
  path = path1 / path2
  filename = filename.dat

while under Linux it will give me
  path = path1
  filename = path\\filename.dat

So I'm now planning to replace all occurrences of os.path.split with a 
call to the following function:


def path_split ( filename ) :
   # under Ubuntu a filename with both
   # forward and backward slashes seems to give trouble
   # already in os.path.split
   filename = filename.replace ( '\\','/')

   return os.path.split ( filename )

how do others solve this problem ?
Are there better ways to solve this problem ?

thanks,
Stef Mientki
--
http://mail.python.org/mailman/listinfo/python-list


Re: os.path.split gets confused with combined \\ and /

2009-05-17 Thread Chris Rebert
On Sun, May 17, 2009 at 3:11 AM, Stef Mientki  wrote:
> hello,
>
> just wonder how others solve this problem:
> I've to distribute both python files and data files.
> Everything is developed under windows and now the datafiles contains paths
> with mixed \\ and /.
> Under windows everthing is working well,
> but under Ubuntu / Fedora sometimes strange errors occurs.
> Now I was thinking that using os.path.split would solve all problems,
> but if I've the following relative path
>
>   path1/path2\\filename.dat
>
> split will deliver the following under windows
>  path = path1 / path2
>  filename = filename.dat
>
> while under Linux it will give me
>  path = path1
>  filename = path\\filename.dat
>
> So I'm now planning to replace all occurences of os.path.split with a call
> to the following function
>
> def path_split ( filename ) :
>   # under Ubuntu a filename with both
>   # forward and backward slashes seems to give trouble
>   # already in os.path.split
>   filename = filename.replace ( '\\','/')
>
>   return os.path.split ( filename )
>
> how do others solve this problem ?
> Are there better ways to solve this problem ?

Just always use forward-slashes for paths in the first place since
they work on both platforms.
But your technique seems a reasonable way of dealing with mixed-up
datafiles, since you're in that situation.
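
A related sketch combining both ideas (replace first, then normpath; note
that on POSIX a backslash is a legal filename character, so os.path.normpath
alone won't fix mixed separators there -- this assumes no file names contain
literal backslashes):

import os.path

def split_mixed(path):
    # normalize everything to forward slashes, then let the
    # platform's own logic canonicalize and split the result
    return os.path.split(os.path.normpath(path.replace('\\', '/')))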

Cheers,
Chris
-- 
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


python and samba

2009-05-17 Thread Fatih Tumen
Hi,
I am working on a directory synchronisation tool for Linux.

$ gcc --version
gcc (GCC) 4.2.4 (Ubuntu 4.2.4-1ubuntu3)
$ python -V
Python 2.5.2
$ smbclient --version
Version 3.0.28a

I first designed it to work on the local filesystem. I am using filecmp.py
(distributed with Python) for comparing the files. Of course I had to
modify it in a way that it returns an object rather than printing to stdout.
I have a compare method in my main class which is os.walk()-ing through
the directories and labeling the files on the GUI according to the object
returned by filecmp.py.
Now I want to add support for syncing between (unix and NT) network
shares without having to change my abstraction much. On a first round of
googling I hit a samba discussion that appeared on this list ages ago. It
did not help me much though. I also found pysmbc, libsmbclient bindings
written by Tim Waugh, but it requires libsmbclient-3.2.x, which requires
libc6 >= 2.8~20080505, which comes with Intrepid, but I have reasons
not to upgrade my Hardy.
Another thing I found was pysamba, by Juan M. Casillas, but it needs
a bit of a hack: samba needs to be configured with Python.
I am looking for something that won't bother the end user nor me. I want
to keep it as simple as possible. I will distribute the libraries myself
if I have to, and if the licence is not an issue.

If I have to summarise: I need to get mtime and size (and preferably stick
with os.walk and filecmp.py if possible) and copy files back and forth using
samba (unix <-> NT). I don't want to reinvent the wheel or overengineer
anything, but if I can do this without any additional libraries, great!
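
For the local side, a minimal sketch of collecting mtime and size with
os.walk and os.stat (standard library only; the samba transport remains the
open question, and `root` is whatever directory you are syncing):

import os

def snapshot(root):
    # map path-relative-to-root -> (mtime, size), so two trees
    # can be compared key by key
    info = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            rel = full[len(root):].lstrip(os.sep)
            info[rel] = (st.st_mtime, st.st_size)
    return info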

I would appreciate any advice on this.
Thanks..
-- 
   Fatih
-- 
http://mail.python.org/mailman/listinfo/python-list


filecmp.py licensing

2009-05-17 Thread Fatih Tumen
Hi,
As I mentioned on the other thread about samba, I am working on a
synchronisation project and using filecmp.py for comparing files. I
modified it according to my needs and planning to distribute it with my
package. At first glance it seems that filecmp.py is a part of Python
package. Though I don't see a licence header on the file I assume that
it is licensed under PSFL. I will distribute my project with GNU GPL or
Creative Commons BY-NC-SA. My question is if I renamed it and
put the Python attribution on the header, would it be alright?

What is the proper way of doing this?
-- 
   Fatih
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Swapping superclass from a module

2009-05-17 Thread Emanuele D'Arrigo
Wow, thank you all. Lots of ideas and things to try! I wish I knew
which one is going to work best. The module I'm trying to (monkey!)
patch is pxdom, and as it is a bit long (5700 lines of code in one
file!) I'm not quite sure whether the simplest patching method will work
or the more complicated ones are necessary. Luckily pxdom has a test
suite, so I should be able to find out quite quickly if a patch has
broken something. Let me chew on it for a little while; I'll report back.

Thank you all!

Manu
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Circular relationship: object - type

2009-05-17 Thread Chris Rebert
On Thu, May 14, 2009 at 3:34 PM, Mohan Parthasarathy  wrote:
> Hi,
>
> I have read several articles and emails:
>
> http://www.cafepy.com/article/python_types_and_objects/python_types_and_objects.html#relationships-transitivity-figure
> http://mail.python.org/pipermail/python-list/2007-February/600128.html
>
>  I understand how type serves to be the default metaclass when an object is
> created and it also can be changed. I also read a few examples on why this
> metaclass can be a powerful concept. What I fail to understand is the
> circular relationship between object and type. Why does type have to be
> subclassed from object ? Just to make "Everything is an object and all
> objects are  inherited from object class".

Yes, essentially. It makes the system nice and axiomatic, so one
doesn't have to deal with special-cases when writing introspective
code.

Axiom 1. All classes ultimately subclass the class `object`.
Equivalently, `issubclass(X, object) and X.__mro__[-1] is object` is
true for any class `X`, and `isinstance(Y, object)` is true for all
objects `Y`.

Axiom 2. All (meta)classes are ultimately instances of the (meta)class `type`.
Equivalently, repeated application of type() to any object will
eventually result in `type`.

Any other formulation besides Python's current one would break these
handy axioms. The canonical object-oriented language, Smalltalk, had a
nearly identical setup with regard to its meta-objects.
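
A quick interactive illustration of both axioms (plain CPython, nothing
assumed beyond the builtins):

>>> type(object) is type        # object is an instance of type (axiom 2)
True
>>> isinstance(type, object)    # type is an object (axiom 1)...
True
>>> issubclass(type, object)    # ...and a subclass of object
True
>>> type(type) is type          # type is its own metaclass
True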

Cheers,
Chris
-- 
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Conceptual flaw in pxdom?

2009-05-17 Thread Emanuele D'Arrigo
Hi everybody,

I'm looking at pxdom and in particular at its foundation class
DOMObject (source code at the end of the message). In it, the author
attempts to allow the establishment of readonly and read&write
attributes through the special methods __getattr__ and __setattr__. In
so doing it is possible to create subclasses such as:

class MyClass(DOMObject):

    def __init__(self):
        DOMObject.__init__(self)
        self._anAttribute = "im_a_readonly_attribute"

    ## The presence of the following method allows
    ## read-only access to the attribute without the
    ## underscore, i.e.: aVar = myClassInstance.anAttribute
    def _get_anAttribute(self): return self._anAttribute

    ## Uncommenting the following line allows the setting of "anAttribute".
    ## Commented, the same action would raise an exception.
    ## def _set_anAttribute(self, value): self._anAttribute = value

This is all good and dandy and it works, mostly. However, if you look
at the code below for the method __getattr__, it appears to be
attempting to prevent direct access to -any- variable starting with an
underscore.

def __getattr__(self, key):
    if key[:1]=='_':
        raise AttributeError, key

But access isn't actually prevented, because __getattr__ is invoked
-only- if an attribute is not found by normal means. So, is it just me,
or does that little snippet of code either have another purpose or simply
not do the intended job?
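
A minimal sketch demonstrating the point (hypothetical Demo class, Python 2
syntax as in the thread):

class Demo(object):
    def __getattr__(self, key):
        if key[:1] == '_':
            raise AttributeError, key
        return 'computed'

d = Demo()
d._x = 1        # lands in d.__dict__ as usual
print d._x      # prints 1: found by normal lookup, __getattr__ never runs
print d.y       # prints 'computed': only now is __getattr__ consulted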

Manu

-
class DOMObject:
    """Base class for objects implementing DOM interfaces

    Provide properties in a way compatible with old versions of Python:
    subclass should provide method _get_propertyName to make a read-only
    property, and also _set_propertyName for a writable. If the readonly
    property is set, all other properties become immutable.
    """
    def __init__(self, readonly= False):
        self._readonly= readonly

    def _get_readonly(self):
        return self._readonly

    def _set_readonly(self, value):
        self._readonly= value

    def __getattr__(self, key):
        if key[:1]=='_':
            raise AttributeError, key
        try:
            getter= getattr(self, '_get_'+key)
        except AttributeError:
            raise AttributeError, key
        return getter()

    def __setattr__(self, key, value):
        if key[:1]=='_':
            self.__dict__[key]= value
            return

        # When an object is readonly, there are a few attributes that can
        # be set regardless. Readonly is one (obviously), but due to a wart
        # in the DOM spec it must also be possible to set nodeValue and
        # textContent to anything on nodes where these properties are
        # defined to be null (with no effect). Check specifically for these
        # property names as a nasty hack to conform exactly to the spec.
        #
        if self._readonly and key not in ('readonly', 'nodeValue',
                'textContent'):
            raise NoModificationAllowedErr(self, key)
        try:
            setter= getattr(self, '_set_'+key)
        except AttributeError:
            if hasattr(self, '_get_'+key):
                raise NoModificationAllowedErr(self, key)
            raise AttributeError, key
        setter(value)
-- 
http://mail.python.org/mailman/listinfo/python-list


Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Edward Grefenstette
Any attempt to do anything with Tkinter (save import) raises the
following show-stopping error:

"Traceback (most recent call last):
  File "", line 1, in 
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/
python2.6/lib-tk/Tkinter.py", line 1645, in __init__
self._loadtk()
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/
python2.6/lib-tk/Tkinter.py", line 1659, in _loadtk
% (_tkinter.TK_VERSION, tk_version)
RuntimeError: tk.h version (8.4) doesn't match libtk.a version (8.5)"

As you can see, I'm running the vanilla install python on OS X 10.5.7.
Does anyone know how I can fix this? Google searches have yielded
results ranging from suggestions it has been fixed (not for me) to
recommendations that the user rebuild python against a newer version
of libtk (which I have no idea how to do).

I would greatly appreciate any assistance the community can provide on
the matter.

Best,
Edward
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Best practice for operations on streams of text

2009-05-17 Thread Beni Cherniavsky
On May 8, 12:07 am, MRAB  wrote:
> def compound_filter(token_stream):
>     stream = lowercase_token(token_stream)
>     stream = remove_boring(stream)
>     stream = remove_dupes(stream)
>     for t in stream(t):
>         yield t

The last loop is superfluous.  You can just do::

def compound_filter(token_stream):
    stream = lowercase_token(token_stream)
    stream = remove_boring(stream)
    stream = remove_dupes(stream)
    return stream

which is simpler and slightly more efficient.  This works because from
the caller's perspective, a generator is just a function that returns
an iterator.  It doesn't matter whether it implements the iterator
itself by containing ``yield`` statements, or shamelessly passes on an
iterator implemented elsewhere.
-- 
http://mail.python.org/mailman/listinfo/python-list


Fwd: [python-win32] Fwd: Autosizing column widths in Excel using win32com.client ?

2009-05-17 Thread James Matthews
-- Forwarded message --
From: Tim Golden 
Date: Sun, May 17, 2009 at 1:00 PM
Subject: Re: [python-win32] Fwd: Autosizing column widths in Excel using
win32com.client ?
To:
Cc: Python-Win32 List 


James Matthews wrote:

> -- Forwarded message --
> From: 
> Date: Fri, May 15, 2009 at 7:45 PM
> Subject: Autosizing column widths in Excel using win32com.client ?
> To: [email protected]
>
>
> Is there a way to autosize the widths of the excel columns as when you
> double click them manually?
>

Usual answer to this kind of question: record a macro
in Excel to do what you want, and then use COM to
automate that. On my Excel 2007, this is the VBA result
of recording:


Sub Macro2()
'
' Macro2 Macro
'
'
  Columns("A:A").Select
  Range(Selection, Selection.End(xlToRight)).Select
  Columns("A:D").EntireColumn.AutoFit
End Sub



You then just fire up win32com.client or comtypes,
according to taste and go from there. If you need
help with the COM side of things, post back here.
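
For instance, a rough win32com translation of that recorded macro might look
like the sketch below (the workbook path and sheet index are assumptions;
the explicit Select steps the recorder emits can simply be dropped):

import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
wb = excel.Workbooks.Open(r"C:\path\to\book.xls")   # hypothetical path
ws = wb.Worksheets(1)
ws.Columns("A:D").EntireColumn.AutoFit()   # same call the macro records
wb.Save()
excel.Quit()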

TJG
___
python-win32 mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-win32



-- 
http://www.goldwatches.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Adding a Par construct to Python?

2009-05-17 Thread jeremy
From a user point of view I think that adding a 'par' construct to
Python for parallel loops would add a lot of power and simplicity,
e.g.

par i in list:
    updatePartition(i)

There would be no locking and it would be the programmer's
responsibility to ensure that the loop was truly parallel and correct.

The intention of this would be to speed up Python execution on multi-
core platforms. Within a few years we will see 100+ core processors as
standard and we need to be ready for that.

There could also be parallel versions of map, filter and reduce
provided.

BUT...none of this would be possible with the current implementation
of Python with its Global Interpreter Lock, which effectively rules
out true parallel processing.

See: 
http://jessenoller.com/2009/02/01/python-threads-and-the-global-interpreter-lock/

What do others think?

Jeremy Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread jeremy
On 17 May, 13:05, [email protected] wrote:
> From a user point of view I think that adding a 'par' construct to
> Python for parallel loops would add a lot of power and simplicity,
> e.g.
>
> par i in list:
>     updatePartition(i)
>
...actually, thinking about this further, I think it would be good to
add a 'sync' keyword which causes a thread rendezvous within a
parallel loop. This would allow parallel loops to run for longer in
certain circumstances without having the overhead of stopping and
restarting all the threads, e.g.

par i in list:
    for j in iterations:
        updatePartition(i)
        sync
        commitBoundaryValues(i)
        sync

This example is a typical iteration over a grid, e.g. a finite-element
calculation, where the boundary values need to be read by neighbouring
partitions before they are updated. It assumes that the new values of
the boundary values are stored in temporary variables until they can
be safely updated.
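
Absent language support, the rendezvous itself can be built today as a
library object; a minimal sketch of a cyclic barrier in pure Python 2 (which
has no threading.Barrier) -- roughly what each 'sync' would call:

import threading

class Barrier(object):
    def __init__(self, n):
        self.n = n                    # number of threads to rendezvous
        self.count = 0
        self.generation = 0
        self.cond = threading.Condition()

    def wait(self):
        self.cond.acquire()
        try:
            gen = self.generation
            self.count += 1
            if self.count == self.n:
                # last thread in: wake everyone, start a new cycle
                self.generation += 1
                self.count = 0
                self.cond.notifyAll()
            else:
                while gen == self.generation:
                    self.cond.wait()
        finally:
            self.cond.release()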

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Piet van Oostrum
> Edward Grefenstette  (EG) wrote:

>EG> Any attempt to do anything with Tkinter (save import) raises the
>EG> following show-stopping error:

>EG> "Traceback (most recent call last):
>EG>   File "", line 1, in 
>EG>   File "/Library/Frameworks/Python.framework/Versions/2.6/lib/
>EG> python2.6/lib-tk/Tkinter.py", line 1645, in __init__
>EG> self._loadtk()
>EG>   File "/Library/Frameworks/Python.framework/Versions/2.6/lib/
>EG> python2.6/lib-tk/Tkinter.py", line 1659, in _loadtk
>EG> % (_tkinter.TK_VERSION, tk_version)
>EG> RuntimeError: tk.h version (8.4) doesn't match libtk.a version (8.5)"

>EG> As you can see, I'm running the vanilla install python on OS X 10.5.7.
>EG> Does anyone know how I can fix this? Google searches have yielded
>EG> results ranging from suggestions it has been fixed (not for me) to
>EG> recommendations that the user rebuild python against a newer version
>EG> of libtk (which I have no idea how to do).

>EG> I would greatly appreciate any assistance the community can provide on
>EG> the matter.

Have you installed Tk version 8.5?

If so, remove it. You might also install the latest 8.4 version.
-- 
Piet van Oostrum 
URL: http://pietvanoostrum.com [PGP 8DAE142BE17999C4]
Private email: [email protected]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Photoimage on button appears pixelated when button is disabled

2009-05-17 Thread Dustan
On May 15, 2:59 pm, Dustan  wrote:
> In tkinter, when I place a photoimage on a button and disable the
> button, the image has background dots scattered through the image.
> Searching the web, I wasn't able to find any documentation on this
> behavior, nor how to turn it off. So here I am. How do I keep this
> from happening?
>
> Also, how can I extract the base-64 encoding of a GIF, so I can put
> the image directly into the code instead of having to keep a separate
> file for the image?
>
> All responses appreciated,
> Dustan

At the very least, someone ought to be able to provide an answer to
the second question.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Steven D'Aprano
On Sun, 17 May 2009 05:05:03 -0700, jeremy wrote:

> From a user point of view I think that adding a 'par' construct to
> Python for parallel loops would add a lot of power and simplicity, e.g.
> 
> par i in list:
> updatePartition(i)
> 
> There would be no locking and it would be the programmer's
> responsibility to ensure that the loop was truly parallel and correct.

What does 'par' actually do there?

Given that it is the programmer's responsibility to ensure that 
updatePartition was actually parallelized, couldn't that be written as:

for i in list:
    updatePartition(i)

and save a keyword?



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Photoimage on button appears pixelated when button is disabled

2009-05-17 Thread Tim Golden

Dustan wrote:

On May 15, 2:59 pm, Dustan  wrote:

In tkinter, when I place a photoimage on a button and disable the
button, the image has background dots scattered through the image.
Searching the web, I wasn't able to find any documentation on this
behavior, nor how to turn it off. So here I am. How do I keep this
from happening?

Also, how can I extract the base-64 encoding of a GIF, so I can put
the image directly into the code instead of having to keep a separate
file for the image?

All responses appreciated,
Dustan


At the very least, someone ought to be able to provide an answer to
the second question.


Well I know nothing about Tkinter, but to do base64 encoding,
you want to look at the base64 module.
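
A minimal sketch of both steps (the PhotoImage part is an assumption based
on Tk's support for base64-encoded GIF data, since I haven't tried it):

import base64

# one-off: print the encoded text and paste it into your source
encoded = base64.encodestring(open('image.gif', 'rb').read())
print encoded

# at runtime, Tkinter can build the image straight from that text:
# photo = Tkinter.PhotoImage(data=encoded)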

TJG
--
http://mail.python.org/mailman/listinfo/python-list


Re: What's the use of the else in try/except/else?

2009-05-17 Thread Beni Cherniavsky
[Long mail.  You may skip to the last paragraph to get the summary.]

On May 12, 12:35 pm, Steven D'Aprano wrote:
> To really be safe, that should become:
>
> try:
>     rsrc = get(resource)
> except ResourceError:
>     log('no more resources available')
>     raise
> else:
>     try:
>         do_something_with(rsrc)
>     finally:
>         rsrc.close()
>
> which is now starting to get a bit icky (but only a bit, and only because
> of the nesting, not because of the else).
>
Note that this example doesn't need ``else``, because the ``except``
clause re-raises the exception.  It could as well be::

try:
rsrc = get(resource)
except ResourceError:
log('no more resources available')
raise
try:
do_something_with(rsrc)
finally:
rsrc.close()

``else`` is relevant only if your ``except`` clause(s) may quietly
suppress the exception::

try:
rsrc = get(resource)
except ResourceError:
log('no more resources available, skipping do_something')
else:
try:
do_something_with(rsrc)
finally:
rsrc.close()

And yes, it's icky - not because of the ``else`` but because
acquisition-release done correctly is always an icky pattern.  That's
why we now have the ``with`` statement - assuming `get()` implements a
context manager, you should be able to write::

with get(resource) as rsrc:
do_something_with(rsrc)

But wait, what if get() fails?  We get an exception!  We wanted to
suppress it::

try:
with get(resource) as rsrc:
do_something_with(rsrc)
except ResourceError:
log('no more resources available, skipping do_something')

But wait, that catches ResourceError in ``do_something_with(rsrc)`` as
well!  Which is precisely what we tried to avoid by using
``try..else``!
Sadly, ``with`` doesn't have an else clause.  If somebody really
believes it should support this pattern, feel free to write a PEP.

I think this is a bad example of ``try..else``.  First, why would you
silently suppress out-of-resource exceptions?  If you don't suppress
them, you don't need ``else``.  Second, such runtime problems are
normally handled uniformly at some high level (log / abort / show a
message box / etc.), wherever they occur - if ``do_something_with(rsrc)
`` raises `ResourceError` you'd want it handled the same way.

So here is another, more practical example of ``try..else``:

try:
bar = foo.get_bar()
except AttributeError:
quux = foo.get_quux()
else:
quux = bar.get_quux()

assuming ``foo.get_bar()`` is optional but ``bar.get_quux()`` isn't.
If we had put ``bar.get_quux()`` inside the ``try``, it could mask a
bug.  In fact to be precise, we don't want to catch an AttributeError
that may happen during the call to ``get_bar()``, so we should move
the call into the ``else``::

try:
get_bar = foo.get_bar
except AttributeError:
quux = foo.get_quux()
else:
quux = get_bar().get_quux()

Ick!

The astute reader will notice that cases where it's important to
localize exception catching involve frequent exceptions like
`AttributeError` or `IndexError` -- and that these cases are already
handled by `getattr` and `dict.get` (courtesy of Guido's Time
Machine).

Bottom line(s):
1. ``try..except..else`` is syntactically needed only when ``except``
might suppress the exception.
2. Minimal scope of ``try..except`` doesn't always apply (for
`AttributeError` it probably does, for `MemoryError` it probably
doesn't).
3. It *is* somewhat awkward to use, which is why the important use
cases - exceptions that are frequently raised and caught - deserve
wrapping by functions like `getattr()` with default arguments.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Grant Edwards
On 2009-05-17, Steven D'Aprano  wrote:
> On Sun, 17 May 2009 05:05:03 -0700, jeremy wrote:
>
>> From a user point of view I think that adding a 'par' construct to
>> Python for parallel loops would add a lot of power and simplicity, e.g.
>> 
>> par i in list:
>> updatePartition(i)
>> 
>> There would be no locking and it would be the programmer's
>> responsibility to ensure that the loop was truly parallel and correct.
>
> What does 'par' actually do there?

My reading of the OP is that it tells the interpreter that it
can execute any/all iterations of updatePartition(i) in parallel
(or presumably serially in any order) rather than serially in a
strict sequence.

> Given that it is the programmer's responsibility to ensure
> that updatePartition was actually parallelized, couldn't that
> be written as:
>
> for i in list:
> updatePartition(i)
>
> and save a keyword?

No, because a "for" loop is defined to execute its iterations
serially in a specific order.  OTOH, a "par" loop is required
to execute once for each value, but those executions could
happen in parallel or in any order.

At least that's how I understood the OP.

-- 
Grant

-- 
http://mail.python.org/mailman/listinfo/python-list


pushback iterator

2009-05-17 Thread Matus
Hallo pylist,

I searched the web and the Python documentation for an implementation of a
pushback iterator but found none in the stdlib.

problem:

when you parse a file, you often have to read a line from the parsed file
before you can decide whether you want that line or not. if not, it would
be a nice feature to be able to push the line back into the iterator, so
next time you pull from the iterator you get this 'unused' line.

solution:
=
I found a nice and fast solution somewhere on the net:

-
class Pushback_wrapper( object ):
    def __init__( self, it ):
        self.it = it
        self.pushed_back = [ ]
        self.nextfn = it.next

    def __iter__( self ):
        return self

    def __nonzero__( self ):
        if self.pushed_back:
            return True

        try:
            self.pushback( self.nextfn( ) )
        except StopIteration:
            return False
        else:
            return True

    def popfn( self ):
        lst = self.pushed_back
        res = lst.pop( )
        if not lst:
            self.nextfn = self.it.next
        return res

    def next( self ):
        return self.nextfn( )

    def pushback( self, item ):
        self.pushed_back.append( item )
        self.nextfn = self.popfn
-

proposal:
=
as this is (as I suppose) a common problem, would it be possible to extend
the stdlib of python (i.e. the itertools module) with a similar solution so
one does not have to reinvent the wheel every time pushback is needed?


thx, Matus
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread bearophileHUGS
Jeremy Martin, nowadays a parallelfor can be useful, and in future
I'll try to introduce similar things in D too, but syntax isn't
enough. You need a way to run things in parallel. But Python has the
GIL.
To implement a good parallel for, your language may also need more
immutable data structures (think about "finger trees"), and pure
functions can improve the safety of your code a lot, and so on.

The multiprocessing module Python2.6 already does something like what
you are talking about. For example I have used the parallel map of
that module to almost double the speed of a small program of mine.
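
A minimal sketch of that parallel map (multiprocessing.Pool, Python 2.6;
update_partition is a stand-in for whatever per-item work you have):

from multiprocessing import Pool

def update_partition(i):
    return i * i          # hypothetical per-item computation

if __name__ == '__main__':
    pool = Pool()                                  # one worker per core
    results = pool.map(update_partition, range(100))
    pool.close()
    pool.join()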

Bye,
bearophile
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pushback iterator

2009-05-17 Thread Mike Kazantsev
On Sun, 17 May 2009 16:39:38 +0200
Matus  wrote:

> I searches web and python documentation for implementation of pushback
> iterator but found none in stdlib.
> 
> problem:
> 
> when you parse a file, often you have to read a line from parsed file
> before you can decide if you want that line it or not. if not, it would
> be a nice feature to be able po push the line back into the iterator, so
> nest time when you pull from iterator you get this 'unused' line.
>  
...
> 
> proposal:
> =
> as this is (as I suppose) common problem, would it be possible to extend
> the stdlib of python (ie itertools module) with a similar solution so
> one do not have to reinvent the wheel every time pushback is needed?  

Sounds to me more like an iterator with a cache - you can't really pull
the line from a real iterable like generator function and then just push
it back.
If this "iterator" is really a list then you can use it as such w/o
unnecessary in-out operations.

And if you're "pushing back" the data for later use you might just as
well push it to dict with the right indexing, so the next "pop" won't
have to roam thru all the values again but instantly get the right one
from the cache, or just get on with that iterable until it depletes.

What real-world scenario am I missing here?

-- 
Mike Kazantsev // fraggod.net


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Benjamin Kaplan
On Sun, May 17, 2009 at 8:42 AM, Piet van Oostrum  wrote:

> > Edward Grefenstette  (EG) wrote:
>
> >EG> Any attempt to do anything with Tkinter (save import) raises the
> >EG> following show-stopping error:
>
> >EG> "Traceback (most recent call last):
> >EG>   File "", line 1, in 
> >EG>   File "/Library/Frameworks/Python.framework/Versions/2.6/lib/
> >EG> python2.6/lib-tk/Tkinter.py", line 1645, in __init__
> >EG> self._loadtk()
> >EG>   File "/Library/Frameworks/Python.framework/Versions/2.6/lib/
> >EG> python2.6/lib-tk/Tkinter.py", line 1659, in _loadtk
> >EG> % (_tkinter.TK_VERSION, tk_version)
> >EG> RuntimeError: tk.h version (8.4) doesn't match libtk.a version (8.5)"
>
> >EG> As you can see, I'm running the vanilla install python on OS X 10.5.7.
> >EG> Does anyone know how I can fix this? Google searches have yielded
> >EG> results ranging from suggestions it has been fixed (not for me) to
> >EG> recommendations that the user rebuild python against a newer version
> >EG> of libtk (which I have no idea how to do).
>
> >EG> I would greatly appreciate any assistance the community can provide on
> >EG> the matter.
>
> Have you installed Tk version 8.5?
>
> If so, remove it. You might also install the latest 8.4 version.


There were a couple of bugs in the 2.6.0 installer that stopped Tkinter from
working, and this error message was given by one of them [1]. The Python
installer looked in /System/Library before /Library, so it used the system
Tk. The linker looks in /Library first, so it found the user-installed Tk
and used that instead. You then get a version mismatch. Try reinstalling
Python (use 2.6.2 if you're not already). That should get it to link with
the proper Tk.

[1] http://bugs.python.org/issue4017

--
> Piet van Oostrum 
> URL: http://pietvanoostrum.com [PGP 8DAE142BE17999C4]
> Private email: [email protected]
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Steven D'Aprano
On Sun, 17 May 2009 09:26:35 -0500, Grant Edwards wrote:

> On 2009-05-17, Steven D'Aprano 
> wrote:
>> On Sun, 17 May 2009 05:05:03 -0700, jeremy wrote:
>>
>>> From a user point of view I think that adding a 'par' construct to
>>> Python for parallel loops would add a lot of power and simplicity,
>>> e.g.
>>> 
>>> par i in list:
>>> updatePartition(i)
>>> 
>>> There would be no locking and it would be the programmer's
>>> responsibility to ensure that the loop was truly parallel and correct.
>>
>> What does 'par' actually do there?
> 
> My reading of the OP is that it tells the interpreter that it can
> execute any/all iterations of updatePartion(i) in parallel (or
> presumably serially in any order) rather than serially in a strict
> sequence.
> 
>> Given that it is the programmer's responsibility to ensure that
>> updatePartition was actually parallelized, couldn't that be written as:
>>
>> for i in list:
>> updatePartition(i)
>>
>> and save a keyword?
> 
> No, because a "for" loop is defined to execute it's iterations serially
> in a specific order.  OTOH, a "par" loop is required to execute once for
> each value, but those executions could happen in parallel or in any
> order.
> 
> At least that's how I understood the OP.

I can try guessing what the OP is thinking just as well as anyone else, 
but "in the face of ambiguity, refuse the temptation to guess" :)

It isn't clear to me what the OP expects the "par" construct is supposed 
to actually do. Does it create a thread for each iteration? A process? 
Something else? Given that the rest of Python will be sequential (apart 
from explicitly parallelized functions), and that the OP specifies that 
updatePartition still needs to handle its own parallelization, does it 
really matter if the calls to updatePartition happen sequentially?

If it's important to make the calls in arbitrary order, random.shuffle 
will do that. If there's some other non-sequential and non-random order 
to the calls, the OP should explain what it is. What else, if anything, 
does par do, that it needs to be a keyword and statement rather than a 
function? What does it do that (say) a parallel version of map() wouldn't 
do?

The OP also suggested:

"There could also be parallel versions of map, filter and reduce
provided."

It makes sense to talk about parallelizing map(), because you can 
allocate a list of the right size to slot the results into as they become 
available. I'm not so sure about filter(), unless you give up the 
requirement that the filtered results occur in the same order as the 
originals.

But reduce()? I can't see how you can parallelize reduce(). By its 
nature, it has to run sequentially: it can't operate on the nth item 
until it is operated on the (n-1)th item.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pushback iterator

2009-05-17 Thread Mike Kazantsev
Somehow, I got the message off the list.

On Sun, 17 May 2009 17:42:43 +0200
Matus  wrote:

> > Sounds to me more like an iterator with a cache - you can't really pull
> > the line from a real iterable like generator function and then just push
> > it back.
> 
> true, that is why you have to implement this iterator wrapper

I fail to see much point in such a dumb cache; in most cases you
shouldn't iterate again and again through the same sequence, so what
good will hardcoding (and thus encouraging) such a thing do?

Besides, this wrapper breaks iteration order, since its cache is LIFO
instead of FIFO; it should rather be implemented with a deque instead
of a list.

> > If this "iterator" is really a list then you can use it as such w/o
> > unnecessary in-out operations.
> 
> of course, it is not a list. you can wrap 'real' iterator using this
> wrapper (), and voila, you can use pushback method to 'push back' item
> received by next method. by calling next again, you will get pushed back
> item again, that is actually the point.

The wrapper differs from "list(iterator)" in only one thing: it might not
make it to the end of the iterable. But if "pushing back" is a common
operation, there's a good chance you'll make it to the end of the
iterator during execution, dragging the whole thing along as a burden each
time.

> > And if you're "pushing back" the data for later use you might just as
> > well push it to dict with the right indexing, so the next "pop" won't
> > have to roam thru all the values again but instantly get the right one
> > from the cache, or just get on with that iterable until it depletes.
> > 
> > What real-world scenario am I missing here?
> > 
> 
> ok, I admit that that the file was not good example. better example
> would be just any iterator you use in your code.

Somehow I've always managed to avoid such re-iteration scenarios, but
of course, it could be just my luck ;)

-- 
Mike Kazantsev // fraggod.net


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread MRAB

Steven D'Aprano wrote:

On Sun, 17 May 2009 09:26:35 -0500, Grant Edwards wrote:


On 2009-05-17, Steven D'Aprano 
wrote:

On Sun, 17 May 2009 05:05:03 -0700, jeremy wrote:


From a user point of view I think that adding a 'par' construct to
Python for parallel loops would add a lot of power and simplicity,
e.g.

par i in list:
updatePartition(i)

There would be no locking and it would be the programmer's
responsibility to ensure that the loop was truly parallel and correct.

What does 'par' actually do there?

My reading of the OP is that it tells the interpreter that it can
execute any/all iterations of updatePartion(i) in parallel (or
presumably serially in any order) rather than serially in a strict
sequence.


Given that it is the programmer's responsibility to ensure that
updatePartition was actually parallelized, couldn't that be written as:

for i in list:
updatePartition(i)

and save a keyword?

No, because a "for" loop is defined to execute it's iterations serially
in a specific order.  OTOH, a "par" loop is required to execute once for
each value, but those executions could happen in parallel or in any
order.

At least that's how I understood the OP.


I can try guessing what the OP is thinking just as well as anyone else, 
but "in the face of ambiguity, refuse the temptation to guess" :)


It isn't clear to me what the OP expects the "par" construct is supposed 
to actually do. Does it create a thread for each iteration? A process? 
Something else? Given that the rest of Python will be sequential (apart 
from explicitly parallelized functions), and that the OP specifies that 
updatePartition still needs to handle its own parallelization, does it 
really matter if the calls to updatePartition happen sequentially?


If it's important to make the calls in arbitrary order, random.shuffle 
will do that. If there's some other non-sequential and non-random order 
to the calls, the OP should explain what it is. What else, if anything, 
does par do, that it needs to be a keyword and statement rather than a 
function? What does it do that (say) a parallel version of map() wouldn't 
do?


The OP also suggested:

"There could also be parallel versions of map, filter and reduce
provided."

It makes sense to talk about parallelizing map(), because you can 
allocate a list of the right size to slot the results into as they become 
available. I'm not so sure about filter(), unless you give up the 
requirement that the filtered results occur in the same order as the 
originals.


But reduce()? I can't see how you can parallelize reduce(). By its 
nature, it has to run sequentially: it can't operate on the nth item 
until it is operated on the (n-1)th item.



It can calculate the items in parallel, but the final result must be
calculated in sequence, although if the final operation is commutative then
some of them could be done in parallel.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Diez B. Roggisch
But reduce()? I can't see how you can parallelize reduce(). By its 
nature, it has to run sequentially: it can't operate on the nth item 
until it is operated on the (n-1)th item.


That depends on the operation in question. Addition for example would 
work. My math skills are a bit too rusty to qualify the exact nature of 
the operation; commutativity springs to mind.


Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: pushback iterator

2009-05-17 Thread Luis Alberto Zarrabeitia Gomez

Quoting Mike Kazantsev :

> And if you're "pushing back" the data for later use you might just as
> well push it to dict with the right indexing, so the next "pop" won't
> have to roam thru all the values again but instantly get the right one
> from the cache, or just get on with that iterable until it depletes.
> 
> What real-world scenario am I missing here?

Other than the one he described in his message? Neither of your proposed
solutions solves the OP's problem. He doesn't have a list (he /could/ build
a list, and thus defeat the purpose of having an iterator). He /could/ use
alternative data structures, like the dictionary you are suggesting... and
he is: he is using his pushback iterator, but he has to include it over and
over.

Currently there is no good "pythonic" way of building a function that
decides to stop consuming from an iterator when the first invalid input is
encountered: that last, invalid input is lost from the iterator. You can't
just abstract the whole logic inside the function; "something" must leak.

Consider, for instance, the itertools.dropwhile (and takewhile). You can't just
use it like

i = iter(something)
itertools.dropwhile(condition, i)
# now consume the rest

Instead, you have to do this:

i = iter(something)
i = itertools.dropwhile(condition, i) 
# and now i contains _another_ iterator
# and the first one still exists[*], but shouldn't be used
# [*] (assume it was a parameter instead of the iter construct)

For parsing files, for instance (similar to the OP's example), it could be nice
to do:

f = file(something)
lines = iter(f)
parse_headers(lines)
parse_body(lines)
parse_footer(lines)

which is currently impossible.

To the OP: if you don't mind doing instead:

f = file(something)
rest = parse_headers(f)
rest = parse_body(rest)
rest = parse_footer(rest)

you could return itertools.chain([pushed_back], iterator) from your parsing
functions. Unfortunately, this adds another layer of itertools.chain on top
of the iterator each time, so you will have to hope this will not cause a
performance/memory penalty.
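
A minimal sketch of that pattern (is_header and handle_header are
hypothetical helpers):

import itertools

def parse_headers(lines):
    lines = iter(lines)
    for line in lines:
        if not is_header(line):
            # first non-header line: put it back in front of the rest
            return itertools.chain([line], lines)
        handle_header(line)
    return iter([])       # input exhausted, nothing left for the caller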

Cheers,

-- 
Luis Zarrabeitia
Facultad de Matemática y Computación, UH
http://profesores.matcom.uh.cu/~kyrie

-- 
Participe en Universidad 2010, del 8 al 12 de febrero de 2010
La Habana, Cuba 
http://www.universidad2010.cu

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Roy Smith
In article <[email protected]>,
 Steven D'Aprano  wrote:

> But reduce()? I can't see how you can parallelize reduce(). By its 
> nature, it has to run sequentially: it can't operate on the nth item 
> until it is operated on the (n-1)th item.

Well, if you're willing to impose the additional constraint that f() must 
be associative, then you could load the items into a tree, and work your 
way up from the bottom of the tree, applying f() pairwise to the left and 
right child of each node, propagating upward.

It would take k1 * O(n) to create the (unsorted) tree, and if all the pairs 
in each layer really could be done in parallel, k2 * O(lg n) to propagate 
the intermediate values.  As long as k2 is large compared to k1, you win.

Of course, if the items are already in some random-access container (such 
as a list), you don't even need to do the first step, but in the general 
case of generating the elements on the fly with an iterable, you do.  Even 
with an iterable, you could start processing the first elements while 
you're still generating the rest of them, but that gets a lot more 
complicated and, assuming k2 >> k1, is of limited value.

If k2 is about the same as k1, then the whole thing is pointless.
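
Spelled out sequentially, the shape of that computation is roughly the
sketch below; the inner loop over pairs is the part that could be farmed
out to parallel workers (f is assumed associative, as stated above):

def tree_reduce(f, items):
    items = list(items)
    if not items:
        raise TypeError('tree_reduce of empty sequence')
    while len(items) > 1:
        # every pair at this level is independent of the others
        pairs = [f(items[i], items[i + 1])
                 for i in range(0, len(items) - 1, 2)]
        if len(items) % 2:
            pairs.append(items[-1])   # odd element rides along
        items = pairs
    return items[0]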

But, this would be something to put in a library function, or maybe a 
special-purpose Python derivative, such as numpy.  Not in the core language.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Gary Herron

MRAB wrote:

Steven D'Aprano wrote:

On Sun, 17 May 2009 09:26:35 -0500, Grant Edwards wrote:


On 2009-05-17, Steven D'Aprano 
wrote:

On Sun, 17 May 2009 05:05:03 -0700, jeremy wrote:


From a user point of view I think that adding a 'par' construct to
Python for parallel loops would add a lot of power and simplicity,
e.g.

par i in list:
updatePartition(i)

There would be no locking and it would be the programmer's
responsibility to ensure that the loop was truly parallel and 
correct.

What does 'par' actually do there?

My reading of the OP is that it tells the interpreter that it can
execute any/all iterations of updatePartion(i) in parallel (or
presumably serially in any order) rather than serially in a strict
sequence.


Given that it is the programmer's responsibility to ensure that
updatePartition was actually parallelized, couldn't that be written 
as:


for i in list:
updatePartition(i)

and save a keyword?

No, because a "for" loop is defined to execute it's iterations serially
in a specific order.  OTOH, a "par" loop is required to execute once 
for

each value, but those executions could happen in parallel or in any
order.

At least that's how I understood the OP.


I can try guessing what the OP is thinking just as well as anyone 
else, but "in the face of ambiguity, refuse the temptation to guess" :)


It isn't clear to me what the OP expects the "par" construct is 
supposed to actually do. Does it create a thread for each iteration? 
A process? Something else? Given that the rest of Python will be 
sequential (apart from explicitly parallelized functions), and that 
the OP specifies that updatePartition still needs to handle its own 
parallelization, does it really matter if the calls to 
updatePartition happen sequentially?


If it's important to make the calls in arbitrary order, 
random.shuffle will do that. If there's some other non-sequential and 
non-random order to the calls, the OP should explain what it is. What 
else, if anything, does par do, that it needs to be a keyword and 
statement rather than a function? What does it do that (say) a 
parallel version of map() wouldn't do?


The OP also suggested:

"There could also be parallel versions of map, filter and reduce
provided."

It makes sense to talk about parallelizing map(), because you can 
allocate a list of the right size to slot the results into as they 
become available. I'm not so sure about filter(), unless you give up 
the requirement that the filtered results occur in the same order as 
the originals.


But reduce()? I can't see how you can parallelize reduce(). By its 
nature, it has to run sequentially: it can't operate on the nth item 
until it is operated on the (n-1)th item.



It can calculate the items in parallel, but the final result must be
calculated sequence, although if the final operation is commutative then
some of them could be done in parallel.


That should read "associative" not "commutative".

For instance A+B+C+D could be calculated sequentially as implied by
 ((A+B)+C)+D
or with some parallelism as implied by
 (A+B)+(C+D)
That's an application of the associativity of addition.

Gary Herron


--
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Steven D'Aprano
On Sun, 17 May 2009 18:24:34 +0200, Diez B. Roggisch wrote:

>> But reduce()? I can't see how you can parallelize reduce(). By its
>> nature, it has to run sequentially: it can't operate on the nth item
>> until it is operated on the (n-1)th item.
> 
> That depends on the operation in question. Addition for example would
> work. 

You'd think so, but you'd be wrong. You can't assume addition is always 
commutative.

>>> reduce(operator.add, (1.0, 1e57, -1e57))
0.0
>>> reduce(operator.add, (1e57, -1e57, 1.0))
1.0



> My math-skills are a bit too rusty to qualify the exact nature of
> the operation, commutativity springs to my mind.

And how is reduce() supposed to know whether or not some arbitrary 
function is commutative?


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Steven D'Aprano
On Sun, 17 May 2009 17:19:15 +0100, MRAB wrote:

>> But reduce()? I can't see how you can parallelize reduce(). By its
>> nature, it has to run sequentially: it can't operate on the nth item
>> until it is operated on the (n-1)th item.
>> 
> It can calculate the items in parallel, 

I don't understand what calculation you are talking about. Let's take a 
simple example:

reduce(operator.sub, [100, 50, 25, 5])  => 100-50-25-5 = 20

What calculations do you expect to do in parallel?


> but the final result must be
> calculated in sequence, although if the final operation is commutative then
> some of them could be done in parallel.

But reduce() can't tell whether the function being applied is commutative 
or not. I suppose it could special-case a handful of special cases (e.g. 
operator.add for int arguments -- but not floats!) or take a caller-
supplied argument that tells it whether the function is commutative or 
not. But in general, you can't assume the function being applied is 
commutative or associative, so unless you're willing to accept undefined 
behaviour, I don't see any practical way of parallelizing reduce().


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread MRAB

Steven D'Aprano wrote:

On Sun, 17 May 2009 17:19:15 +0100, MRAB wrote:


But reduce()? I can't see how you can parallelize reduce(). By its
nature, it has to run sequentially: it can't operate on the nth item
until it has operated on the (n-1)th item.

It can calculate the items in parallel, 


I don't understand what calculation you are talking about. Let's take a 
simple example:


reduce(operator.sub, [100, 50, 25, 5])  => 100-50-25-5 = 20

What calculations do you expect to do in parallel?



but the final result must be
calculated in sequence, although if the final operation is commutative then
some of them could be done in parallel.


But reduce() can't tell whether the function being applied is commutative 
or not. I suppose it could special-case a handful of special cases (e.g. 
operator.add for int arguments -- but not floats!) or take a caller-
supplied argument that tells it whether the function is commutative or 
not. But in general, you can't assume the function being applied is 
commutative or associative, so unless you're willing to accept undefined 
behaviour, I don't see any practical way of parallelizing reduce().



I meant associative not commutative.

I was thinking about calculating the sum of a list of expressions, where
the expressions could be calculated in parallel.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Diez B. Roggisch



My math-skills are a bit too rusty to qualify the exact nature of
the operation, commutativity springs to my mind.


And how is reduce() supposed to know whether or not some arbitrary 
function is commutative?


I don't recall anybody saying it should know that - do you? The OP wants 
to introduce parallel variants, not replace the existing ones.


Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Diez B. Roggisch
But reduce() can't tell whether the function being applied is commutative 
or not. I suppose it could special-case a handful of special cases (e.g. 
operator.add for int arguments -- but not floats!) or take a caller-
supplied argument that tells it whether the function is commutative or 
not. But in general, you can't assume the function being applied is 
commutative or associative, so unless you're willing to accept undefined 
behaviour, I don't see any practical way of parallelizing reduce().


def reduce(operation, sequence, startitem=None, parallelize=False)

should be enough. Approaches such as OpenMP also don't guess, they use 
explicit annotations.


Diez
--
http://mail.python.org/mailman/listinfo/python-list


threading issue

2009-05-17 Thread anusha k
Hi,

I am using PyGTK and Glade on the front end and PostgreSQL and
python-twisted (XML-RPC) on the back end. My issue is that I am trying to
add a progress bar to my application, but when the progress bar comes up
it blocks the backend process. So I started using threading. That works,
but now I am not able to destroy the window containing the progress bar,
which is part of the main window.

My code is:
class ledger(threading.Thread):
    """This class sets the fraction of the progressbar"""

    # Thread event, stops the thread if it is set.
    stopthread = threading.Event()
    # ..
    wTree = gtk.glade.XML('gnukhata/main_window.glade', 'window_progressbar')
    window = wTree.get_widget('window_progressbar')
    window.set_size_request(300, 50)
    progressbar = wTree.get_widget('progressbar')
    # window.connect('destroy', self.main_quit)
    window.show_all()

    def run(self):
        """Run method, this is the code that runs while the thread is alive."""
        # Importing the progressbar widget from the global scope
        # global progressbar
        course = True
        # While the stopthread event isn't set, the thread keeps going on
        while course:
            # Acquiring the gtk global mutex
            gtk.gdk.threads_enter()
            # Setting a value for the fraction
            l = 0.1
            while l < 1:
                self.progressbar.pulse()
                time.sleep(0.1)
                l = l + 0.1

            queryParams = []
            res1 = self.x.account.getAllAccountNamesByLedger(queryParams)
            for l in range(0, len(res1)):
                pass  # ;;
            # Releasing the gtk global mutex
            gtk.gdk.threads_leave()

            # Delaying 100ms until the next iteration
            time.sleep(0.1)
            course = False
            # gtk.main_quit()
            global fs
            print 'anu'
            # Stopping the thread and the gtk's main loop
            fs.stop()

            # window.destroy()

        self.ods.save("Ledger.ods")
        os.system("ooffice Ledger.ods")

    def stop(self):
        """Stop method, sets the event to terminate the thread's main loop"""
        self.stopthread.set()
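
A common way around this class of problem is to let the GTK main loop
drive the progress bar instead of touching widgets from a worker thread; a
minimal sketch, assuming PyGTK's gobject module (start_pulse and source_id
are illustrative names, not part of the code above):

import gobject

def start_pulse(progressbar):
    def pulse():
        progressbar.pulse()
        return True                  # returning True keeps the timeout alive
    return gobject.timeout_add(100, pulse)   # fire every 100 ms

# later, when the work is done (still in the main loop):
#     gobject.source_remove(source_id)
#     window.destroy()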

*



njoy the share of freedom,
Anusha Kadambala
-- 
http://mail.python.org/mailman/listinfo/python-list


Seeking old post on developers who like IDEs vs developers who like simple languages

2009-05-17 Thread Steve Ferg
A few years ago someone, somewhere on the Web, posted a blog in which
he observed that developers, by general temperament, seem to fall into
two groups.

On the one hand, there are developers who love big IDEs with lots of
features (code generation, error checking, etc.), and rely on them to
provide the high level of support needed to be reasonably productive
in heavy-weight languages (e.g. Java).

On the other hand there are developers who much prefer to keep things
light-weight and simple.  They like clean high-level languages (e.g.
Python) which are compact enough that you can keep the whole language
in your head, and require only a good text editor to be used
effectively.

The author wasn't saying that one was better than the other: only that
there seemed to be this recognizable difference in preferences.

I periodically think of that blog, usually in circumstances that make
me also think "Boy, that guy really got it right".  But despite
repeated and prolonged bouts of googling I haven't been able to find
the article again.  I must be using the wrong search terms or
something.

Does anybody have a link to this article?

Thanks VERY MUCH in advance,
-- Steve Ferg
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pushback iterator

2009-05-17 Thread Matus


Luis Alberto Zarrabeitia Gomez wrote:
> Quoting Mike Kazantsev :
> 
>> And if you're "pushing back" the data for later use you might just as
>> well push it to dict with the right indexing, so the next "pop" won't
>> have to roam thru all the values again but instantly get the right one
>> from the cache, or just get on with that iterable until it depletes.
>>
>> What real-world scenario am I missing here?
> 
> Other than one he described in his message? Neither of your proposed solutions
> solves the OP's problem. He doesn't have a list (he /could/ build a list, and
> thus defeat the purpose of having an iterator). He /could/ use alternative 
> data
> structures, like the dictionary you are suggesting... and he is, he is using 
> his
> pushback iterator, but he has to include it over and over.
> 
> Currently there is no good "pythonic" way of building a functions that decide 
> to
> stop consuming from an iterator when the first invalid input is encountered:
> that last, invalid input is lost from the iterator. You can't just abstract 
> the
> whole logic inside the function, "something" must leak.
> 
> Consider, for instance, the itertools.dropwhile (and takewhile). You can't 
> just
> use it like
> 
> i = iter(something)
> itertools.dropwhile(condition, i)
> # now consume the rest
> 
> Instead, you have to do this:
> 
> i = iter(something)
> i = itertools.dropwhile(condition, i) 
> # and now i contains _another_ iterator
> # and the first one still exists[*], but shouldn't be used
> # [*] (assume it was a parameter instead of the iter construct)
> 
> For parsing files, for instance (similar to the OP's example), it could be 
> nice
> to do:
> 
> f = file(something)
> lines = iter(f)
> parse_headers(lines)
> parse_body(lines)
> parse_footer(lines)
> 

that is basically one of many possible scenarios I was referring to.
other example would be:


iter = Pushback_wrapper(open('my.file').readlines())
for line in iter:
    if is_outer_scope(line):
        # do some processing for this logical scope of the file;
        # there are only a few outer scope lines
        continue

    for line in iter:
        # here we expect 1000 - 2000 lines of inner scope and we do not
        # want to run is_outer_scope() for every line as it is expensive,
        # so we decided to reiterate
        if is_inner_scope(line):
            # do some processing for this logical scope of the file
            # until an outer scope condition occurs
            pass
        elif is_outer_scope(line):
            iter.pushback(line)
            break
        else:
            pass  # flush line
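
For completeness, a wrapper like the Pushback_wrapper used above is only a
few lines; a minimal sketch relying on nothing but the iterator protocol
(the class and attribute names are illustrative):

class PushbackWrapper(object):
    def __init__(self, iterable):
        self._it = iter(iterable)
        self._stack = []
    def __iter__(self):
        return self
    def next(self):                    # would be __next__ in Python 3
        if self._stack:
            return self._stack.pop()
        return self._it.next()
    def pushback(self, item):
        self._stack.append(item)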


> which is currently impossible.
> 
> To the OP: if you don't mind doing instead:
> 
> f = file(something)
> rest = parse_headers(f)
> rest = parse_body(rest)
> rest = parse_footer(rest)
> 
> you could return itertools.chain([pushed_back], iterator) from your parsing
> functions. Unfortunately, this way will add another layer of itertools.chain 
> on
> top of the iterator, you will have to hope this will not cause a
> performance/memory penalty.
> 
> Cheers,
> 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Swapping superclass from a module

2009-05-17 Thread Terry Reedy

Peter Otten wrote:

Terry Reedy wrote:




If the names of superclasses are resolved when classes are instantiated,
the patching is easy.  If, as I would suspect, the names are resolved
when the classes are created, before the module becomes available to the
importing code, then much more careful and extensive patching would be
required, if it is even possible.  (Objects in tuples cannot be
replaced, and some attributes are not writable.)


It may be sufficient to patch the subclasses:


I was not sure if __bases__ is writable or not.  There is also __mro__ 
to consider.



$ cat my_file.py
class Super(object):
def __str__(self):
return "old"

class Sub(Super):
def __str__(self):
return "Sub(%s)" % super(Sub, self).__str__()

class Other(object):
pass

class SubSub(Sub, Other):
def __str__(self):
return "SubSub(%s)" % super(SubSub, self).__str__()

if __name__ == "__main__":
print Sub()

$ cat main2.py
import my_file
OldSuper = my_file.Super

class NewSuper(OldSuper):
def __str__(self):
return "new" + super(NewSuper, self).__str__()

my_file.Super = NewSuper
for n, v in vars(my_file).iteritems():
if v is not NewSuper:
try:
bases = v.__bases__
except AttributeError:
pass
else:
if OldSuper in bases:
print "patching", n
v.__bases__ = tuple(NewSuper if b is OldSuper else b
for b in bases)


print my_file.Sub()
print my_file.SubSub()
$ python main2.py
patching Sub
Sub(newold)
SubSub(Sub(newold))

Peter



--
http://mail.python.org/mailman/listinfo/python-list


PEP 384: Defining a Stable ABI

2009-05-17 Thread Martin v. Löwis
Thomas Wouters reminded me of a long-standing idea; I finally
found the time to write it down.

Please comment!

Regards,
Martin

PEP: 384
Title: Defining a Stable ABI
Version: $Revision: 72754 $
Last-Modified: $Date: 2009-05-17 21:14:52 +0200 (So, 17. Mai 2009) $
Author: Martin v. Löwis 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 17-May-2009
Python-Version: 3.2
Post-History:

Abstract
========

Currently, each feature release introduces a new name for the
Python DLL on Windows, and may cause incompatibilities for extension
modules on Unix. This PEP proposes to define a stable set of API
functions which are guaranteed to be available for the lifetime
of Python 3, and which will also remain binary-compatible across
versions. Extension modules and applications embedding Python
can work with different feature releases as long as they restrict
themselves to this stable ABI.

Rationale
=========

The primary source of ABI incompatibility are changes to the lay-out
of in-memory structures. For example, the way in which string interning
works, or the data type used to represent the size of an object, have
changed during the life of Python 2.x. As a consequence, extension
modules making direct access to fields of strings, lists, or tuples,
would break if their code is loaded into a newer version of the
interpreter without recompilation: offsets of other fields may have
changed, making the extension modules access the wrong data.

In some cases, the incompatibilities only affect internal objects of
the interpreter, such as frame or code objects. For example, the way
line numbers are represented has changed in the 2.x lifetime, as has
the way in which local variables are stored (due to the introduction
of closures). Even though most applications probably never used these
objects, changing them required changing the PYTHON_API_VERSION.

On Linux, changes to the ABI are often not much of a problem: the
system will provide a default Python installation, and many extension
modules are already provided pre-compiled for that version. If additional
modules are needed, or additional Python versions, users can typically
compile them themselves on the system, resulting in modules that use
the right ABI.

On Windows, multiple simultaneous installations of different Python
versions are common, and extension modules are compiled by their
authors, not by end users. To reduce the risk of ABI incompatibilities,
Python currently introduces a new DLL name pythonXY.dll for each
feature release, whether or not ABI incompatibilities actually exist.

With this PEP, it will be possible to reduce the dependency of binary
extension modules on a specific Python feature release, and applications
embedding Python can be made to work with different releases.

Specification
=============

The ABI specification falls into two parts: an API specification,
specifying what function (groups) are available for use with the
ABI, and a linkage specification specifying what libraries to link
with. The actual ABI (layout of structures in memory, function
calling conventions) is not specified, but implied by the
compiler. For selected platforms, a specific ABI is recommended.

During evolution of Python, new ABI functions will be added.
Applications using them will then have a requirement on a minimum
version of Python; this PEP provides no mechanism for such
applications to fall back when the Python library is too old.

Terminology
-----------

Applications and extension modules that want to use this ABI
are collectively referred to as "applications" from here on.

Header Files and Preprocessor Definitions
-----------------------------------------

Applications shall only include the header file Python.h (before
including any system headers), or, optionally, include pyconfig.h, and
then Python.h.

During the compilation of applications, the preprocessor macro
Py_LIMITED_API must be defined. Doing so will hide all definitions
that are not part of the ABI.

Structures
----------

Only the following structures and structure fields are accessible to
applications:

- PyObject (ob_refcnt, ob_type)
- PyVarObject (ob_base, ob_size)
- Py_buffer (buf, obj, len, itemsize, readonly, ndim, shape,
  strides, suboffsets, smalltable, internal)
- PyMethodDef (ml_name, ml_meth, ml_flags, ml_doc)
- PyMemberDef (name, type, offset, flags, doc)
- PyGetSetDef (name, get, set, doc, closure)

The accessor macros to these fields (Py_REFCNT, Py_TYPE, Py_SIZE)
are also available to applications.

The following types are available, but opaque (i.e. incomplete):

- PyThreadState
- PyInterpreterState

Type Objects
------------

The structure of type objects is not available to applications;
declaration of "static" type objects is not possible anymore
(for applications using this ABI).
Instead, type objects get created dynamically. To allow an
easy creation of types (in particular, to be able to fill out
function pointers easily), the following structures and functions are provided.

Re: Adding a Par construct to Python?

2009-05-17 Thread Paul Boddie
On 17 Mai, 14:05, [email protected] wrote:
> From a user point of view I think that adding a 'par' construct to
> Python for parallel loops would add a lot of power and simplicity,
> e.g.
>
> par i in list:
>     updatePartition(i)

You can do this right now with a small amount of work to make
updatePartition a callable which works in parallel, and without the
need for extra syntax. For example, with the pprocess module, you'd
use boilerplate like this:

  import pprocess
  queue = pprocess.Queue(limit=ncores)
  updatePartition = queue.manage(pprocess.MakeParallel
(updatePartition))

(See http://www.boddie.org.uk/python/pprocess/tutorial.html#Map for
details.)

At this point, you could use a normal "for" loop, and you could then
"sync" for results by reading from the queue. I'm sure it's a similar
story with the multiprocessing/processing module.
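
For comparison, a minimal sketch of the same pattern with the
multiprocessing module that ships with Python 2.6 (updatePartition here is
a picklable stand-in for the real work):

from multiprocessing import Pool

def updatePartition(i):
    return i * i                   # placeholder for the real, CPU-bound work

if __name__ == '__main__':
    pool = Pool()                  # one worker process per core by default
    results = pool.map(updatePartition, range(100))
    pool.close()
    pool.join()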

> There would be no locking and it would be the programmer's
> responsibility to ensure that the loop was truly parallel and correct.

Yes, that's the idea.

> The intention of this would be to speed up Python execution on multi-
> core platforms. Within a few years we will see 100+ core processors as
> standard and we need to be ready for that.

In what sense are we not ready? Perhaps the abstractions could be
better, but it's definitely possible to run Python code on multiple
cores today and get decent core utilisation.

> There could also be parallel versions of map, filter and reduce
> provided.

Yes, that's what pprocess.pmap is for, and I imagine that other
solutions offer similar facilities.

> BUT...none of this would be possible with the current implementation
> of Python with its Global Interpreter Lock, which effectively rules
> out true parallel processing.
>
> See: http://jessenoller.com/2009/02/01/python-threads-and-the-global-inter...
>
> What do others think?

That your last statement is false: true parallel processing is
possible today. See the Wiki for a list of solutions:

http://wiki.python.org/moin/ParallelProcessing

In addition, Jython and IronPython don't have a global interpreter
lock, so you have the option of using threads with those
implementations, too.

Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Dirkjan Ochtman
On Sun, May 17, 2009 at 10:54 PM, "Martin v. Löwis"  wrote:
> Excluded Functions
> --
>
> Functions declared in the following header files are not part
> of the ABI:
> - cellobject.h
> - classobject.h
> - code.h
> - frameobject.h
> - funcobject.h
> - genobject.h
> - pyarena.h
> - pydebug.h
> - symtable.h
> - token.h
> - traceback.h

What kind of effect does this have on optimization efforts, for
example all the stuff done by Antoine Pitrou over the last few months,
and the first few results from unladen? Will it mean we won't get to
the good optimizations until 4.0? Or does it just mean unladen swallow
takes longer to come back to trunk (until 4.0) and every extension
author who wants to be compatible with it will basically have the same
burden as now?

Cheers,

Dirkjan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Martin v. Löwis
>> Functions declared in the following header files are not part
>> of the ABI:
>> - cellobject.h
>> - classobject.h
>> - code.h
>> - frameobject.h
>> - funcobject.h
>> - genobject.h
>> - pyarena.h
>> - pydebug.h
>> - symtable.h
>> - token.h
>> - traceback.h
> 
> What kind of effect does this have on optimization efforts, for
> example all the stuff done by Antoine Pitrou over the last few months,
> and the first few results from unladen? 

I fail to see the relationship, so: no effect that I can see.

Why do you think that optimization efforts could be related to
the PEP 384 proposal?

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Dirkjan Ochtman
On Mon, May 18, 2009 at 12:07 AM, "Martin v. Löwis"  wrote:
> I fail to see the relationship, so: no effect that I can see.
>
> Why do you think that optimization efforts could be related to
> the PEP 384 proposal?

It would seem to me that optimizations are likely to require data
structure changes, for exactly the kind of core data structures that
you're talking about locking down. But that's just a high-level view,
I might be wrong.

Cheers,

Dirkjan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Creating temporary files for a web application

2009-05-17 Thread sserrano
I would use a js plotting library, like http://code.google.com/p/flot/

On 8 mayo, 06:26, koranthala  wrote:
> Hi,
>    I am doing web development using Django. I need to create an image
> (chart) and show it to the users - based on some data which user
> selects.
>    My question is - how do I create a temporary image for the user? I
> thought of tempfile, but I think it will be deleted once the process
> is done - which would happen by the time the user starts seeing the image.
> I can think of no other option other than to have another script which
> will delete all images based on time of creation.
>    Since python is extensively used for web development, I guess this
> should be a common scenario for many people here. How do you usually
> handle this?
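
For what it's worth, the age-based cleanup script the poster describes is
only a few lines; a minimal sketch, assuming the generated charts live in a
single directory (the names and the one-hour cutoff are illustrative):

import os
import time

def prune_old_charts(directory, max_age=3600):
    now = time.time()
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        # remove files whose mtime is older than max_age seconds
        if os.path.isfile(path) and now - os.path.getmtime(path) > max_age:
            os.remove(path)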

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Martin v. Löwis
Dirkjan Ochtman wrote:
> On Mon, May 18, 2009 at 12:07 AM, "Martin v. Löwis"  
> wrote:
>> I fail to see the relationship, so: no effect that I can see.
>>
>> Why do you think that optimization efforts could be related to
>> the PEP 384 proposal?
> 
> It would seem to me that optimizations are likely to require data
> structure changes, for exactly the kind of core data structures that
> you're talking about locking down. But that's just a high-level view,
> I might be wrong.

Ah. It's exactly the opposite: The purpose of the PEP is not to lock
the data structures down, but to allow more flexible evolution of
them - by completely hiding them from extension modules.

Currently, any data structure change must be weighed for its impact
on binary compatibility. With the PEP, changing structures can
be done fairly freely - with the exception of the very few structures
that do get locked down. In particular, the list of header files
that you quoted precisely contains the structures that can be
modified with no impact on the ABI.

I'm not aware that any of the structures that I propose to lock
would be relevant for optimization - but I might be wrong. If so,
I'd like to know, and it would be possible to add accessor functions
in cases where extension modules might still legitimately want to
access certain fields.

Certain changes to the VM would definitely be binary-incompatible,
such as removal of reference counting. However, such a change would
probably have a much wider effect, breaking not just binary
compatibility, but also source compatibility. It would be justified
to call a Python release that makes such a change 4.0.

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Edward Grefenstette
I thought of this. I uninstalled Tk from macports, but the same error
crops up. Evidently, Tk 8.5 remains installed somewhere else, but I
don't know where. How can I find out?

Best,
Edward

>
>
> Have you installed Tk version 8.5?
>
> If so, remove it. You might also install the latest 8.4 version.
> --
> Piet van Oostrum 
> URL:http://pietvanoostrum.com[PGP 8DAE142BE17999C4]
> Private email: [email protected]

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Martin v. Löwis
Dino Viehland wrote:
> Dirkjan Ochtman wrote:
>> It would seem to me that optimizations are likely to require data
>> structure changes, for exactly the kind of core data structures that
>> you're talking about locking down. But that's just a high-level view,
>> I might be wrong.
>>
> 
> 
> In particular I would guess that ref counting is the biggest issue here.
> I would think not directly exposing the field and having inc/dec ref
> functions (real methods, not macros) for it would give a lot more
> ability to change the API in the future.

In the context of optimization, I'm skeptical that introducing functions
for the reference counting would be useful. Making the INCREF/DECREF
macros functions just in case the reference counting goes away is IMO
an unacceptable performance cost.

Instead, such a change should go through the regular deprecation
procedure and/or cause the release of Python 4.0.

> It also might make it easier for alternate implementations to support
> the same API so some modules could work cross implementation - but I
> suspect that's a non-goal of this PEP :).

Indeed :-) I'm also skeptical that this would actually allow
cross-implementation modules to happen. The list of functions that
an alternate implementation would have to provide is fairly long.

The memory management APIs in particular also assume a certain layout
of Python objects in general, namely that they start with a header
whose size is a compile-time constant. Again, making this more flexible
"just in case" would also impact performance, and probably fairly badly
so.

> Other fields directly accessed (via macros or otherwise) might have similar
> problems but they don't seem as core as ref counting.

Access to the type object reference is probably similar. All the other
structs are used "directly" in C code, with no accessor macros.

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Dino Viehland
Dirkjan Ochtman wrote:
>
> It would seem to me that optimizations are likely to require data
> structure changes, for exactly the kind of core data structures that
> you're talking about locking down. But that's just a high-level view,
> I might be wrong.
>


In particular I would guess that ref counting is the biggest issue here.
I would think not directly exposing the field and having inc/dec ref
functions (real methods, not macros) for it would give a lot more
ability to change the API in the future.

It also might make it easier for alternate implementations to support
the same API so some modules could work cross implementation - but I
suspect that's a non-goal of this PEP :).

Other fields directly accessed (via macros or otherwise) might have similar
problems but they don't seem as core as ref counting.
-- 
http://mail.python.org/mailman/listinfo/python-list


http://orbited.org/ - anybody using it?

2009-05-17 Thread Aljosa Mohorovic
can anybody comment on http://orbited.org/ ?
is it an active project? does it work?

Aljosa Mohorovic
-- 
http://mail.python.org/mailman/listinfo/python-list


how to verify SSL certificate chain - M2 Crypto library?

2009-05-17 Thread skrobul
Hi,

is there any simple way to do SSL certificate chain validation using
M2Crypto or any other library ?

Basically what I want to achieve is to be able to say if certificate
chain contained in 'XYZ.pem' file is issued by known CA (list of
common root-CA's certs should be loaded from separate directory).
Right now I do it by spawning command 'openssl verify -CApath
 XYZ.pem' and it works. However I think that there must
be a simpler way.
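
For reference, the spawning approach can at least be wrapped compactly; a
minimal sketch with subprocess, assuming the CA directory has been prepared
for -CApath (e.g. with c_rehash). Older openssl binaries can exit 0 even on
failure, hence the check of the output:

import subprocess

def chain_is_valid(pem_path, capath):
    proc = subprocess.Popen(
        ['openssl', 'verify', '-CApath', capath, pem_path],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, _ = proc.communicate()
    # openssl prints "<file>: OK" only when verification succeeds
    return proc.returncode == 0 and out.strip().endswith(': OK')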

 I've spent the last few hours trying to go through the M2Crypto sources
and API "documentation", but the only possible way that I've found is
spawning a separate server thread listening on some port and connecting
just to verify whether the cert chain is valid, but going that way is at
least not right. The other approach which I've tried is using the low-level
function m2.X509_verify() but it does not work as I expect. It returns
0 (which means valid) even if CA certificate is not known.

Any suggestions / tips ?

thanks,
Marek Skrobacki
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Steven D'Aprano
On Sun, 17 May 2009 20:34:00 +0200, Diez B. Roggisch wrote:

>>> My math-skills are a bit too rusty to qualify the exact nature of the
>>> operation, commutativity springs to my mind.
>> 
>> And how is reduce() supposed to know whether or not some arbitrary
>> function is commutative?
> 
> I don't recall anybody saying it should know that - do you? The OP wants
> to introduce parallel variants, not replace the existing ones.

Did I really need to spell it out? From context, I'm talking about the 
*parallel version* of reduce().



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Michael Foord

Martin v. Löwis wrote:
> Dino Viehland wrote:
>> Dirkjan Ochtman wrote:
>>> It would seem to me that optimizations are likely to require data
>>> structure changes, for exactly the kind of core data structures that
>>> you're talking about locking down. But that's just a high-level view,
>>> I might be wrong.
>>
>> In particular I would guess that ref counting is the biggest issue here.
>> I would think not directly exposing the field and having inc/dec ref
>> functions (real methods, not macros) for it would give a lot more
>> ability to change the API in the future.
>
> In the context of optimization, I'm skeptical that introducing functions
> for the reference counting would be useful. Making the INCREF/DECREF
> macros functions just in case the reference counting goes away is IMO
> an unacceptable performance cost.
>
> Instead, such a change should go through the regular deprecation
> procedure and/or cause the release of Python 4.0.
>
>> It also might make it easier for alternate implementations to support
>> the same API so some modules could work cross implementation - but I
>> suspect that's a non-goal of this PEP :).
>
> Indeed :-) I'm also skeptical that this would actually allow
> cross-implementation modules to happen. The list of functions that
> an alternate implementation would have to provide is fairly long.


Just in case you're unaware of it; the company I work for has an open 
source project called Ironclad. This *is* a reimplementation of the 
Python C API and gives us binary compatibility with [some subset of] 
Python C extensions for use from IronPython.


http://www.resolversystems.com/documentation/index.php/Ironclad.html

It's an ambitious project but it is now at the stage where 1000s of the 
Numpy and Scipy tests pass when run from IronPython. I don't think this 
PEP impacts the project, but it is not completely unfeasible for the 
alternative implementations to do this.


In particular we have had to address the issue of the GIL and extensions 
(IronPython has no GIL) and the use of reference counting (which 
IronPython also doesn't have).


Michael Foord



--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog


--
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Kevin Walzer

Edward Grefenstette wrote:

I thought of this. I uninstalled Tk from macports, but the same error
crops up. Evidently, Tk 8.5 remains installed somewhere else, but I
don't know where. How can I find out?

Best,
Edward



Look in /Library/Frameworks...

Kevin Walzer
Code by Kevin
http://www.codebykevin.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread James Y Knight


On May 17, 2009, at 4:54 PM, Martin v. Löwis wrote:

Currently, each feature release introduces a new name for the
Python DLL on Windows, and may cause incompatibilities for extension
modules on Unix. This PEP proposes to define a stable set of API
functions which are guaranteed to be available for the lifetime
of Python 3, and which will also remain binary-compatible across
versions. Extension modules and applications embedding Python
can work with different feature releases as long as they restrict
themselves to this stable ABI.



It seems like a good ideal to strive for.

But I think this is too strong a promise. IMO it would be better to  
say that ABI compatibility across releases is a goal. If someone does  
make a change that breaks the ABI, I'd expect whomever is proposing it  
to put forth a fairly strong argument towards why it's a worthwhile  
change. But it should be possible and allowed, given the right  
circumstances. Because I think it's pretty much inevitable that it  
*will* need to happen, sometime.


(of course there will need to be ABI tests, so that any potential ABI  
breakages are known about when they occur)


Python is much more defined by its source language than its C  
extension API, so tying the python major version number to the C ABI  
might not be the best idea from a "marketing" standpoint. (I can see  
it now..."Python 4.0 major new features: we changed the C method  
definition struct layout incompatibly" :)


James
--
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Steven D'Aprano
On Sun, 17 May 2009 20:36:36 +0200, Diez B. Roggisch wrote:

>> But reduce() can't tell whether the function being applied is
>> commutative or not. I suppose it could special-case a handful of
>> special cases (e.g. operator.add for int arguments -- but not floats!)
>> or take a caller- supplied argument that tells it whether the function
>> is commutative or not. But in general, you can't assume the function
>> being applied is commutative or associative, so unless you're willing
>> to accept undefined behaviour, I don't see any practical way of
>> parallelizing reduce().
> 
> def reduce(operation, sequence, startitem=None, parallelize=False)
> 
> should be enough. Approaches such as OpenMP also don't guess, they use
> explicit annotations.

It would be nice if the OP would speak up and tell us what he intended, 
so we didn't have to guess what he meant. We're getting further and 
further away from his original suggestion of a "par" loop.

If you pass parallize=True, then what? Does it assume that operation is 
associative, or take some steps to ensure that it is? Does it guarantee 
to perform the operations in a specific order, or will it potentially 
give non-deterministic results depending on the order that individual 
calculations come back?

As I said earlier, parallelizing map() sounds very plausible to me, but 
the approaches that people have talked about for parallelizing reduce() 
so far sound awfully fragile and magical to me. But at least I've 
learned one thing: given an associative function, you *can* parallelize 
reduce using a tree. (Thanks Roy!)
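
For the record, that tree-shaped reduction is short to sketch; a minimal
version, valid only when op is associative (the pairwise step is the part
that could be farmed out to worker threads or processes):

def tree_reduce(op, items):
    items = list(items)
    if not items:
        raise TypeError('tree_reduce() of empty sequence')
    while len(items) > 1:
        # each pair is independent of the others, so this pass is the
        # parallelizable step
        reduced = [op(a, b) for a, b in zip(items[::2], items[1::2])]
        if len(items) % 2:
            reduced.append(items[-1])   # carry an odd trailing element
        items = reduced
    return items[0]

With operator.add over [100, 50, 25, 5] this computes (100+50)+(25+5); with
a non-associative op such as operator.sub it yields 30 rather than the
sequential 20, which is exactly the undefined behaviour discussed above.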



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Ned Deily
In article 
,
 Edward Grefenstette  wrote:
> I thought of this. I uninstalled Tk from macports, but the same error
> crops up. Evidently, Tk 8.5 remains installed somewhere else, but I
> don't know where. How can I find out?

Look in /Library/Frameworks for Tcl.framework and Tk.framework.  You can 
safely delete those if you don't need them.  But also make sure you 
update to the latest 2.6 (currently 2.6.2) python.org version; as noted, 
the original 2.6 python.org release had issues with user-installed Tcl 
and Tk frameworks.

-- 
 Ned Deily,
 [email protected]

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Edward Grefenstette
Thanks to Kevin and Ned for the pointers.
The question is now this. Running find tells me I have tk.h in the
following locations:
===
/Developer/SDKs/MacOSX10.4u.sdk/System/Library/Frameworks/Tk.framework/
Versions/8.4/Headers/tk.h
/Developer/SDKs/MacOSX10.4u.sdk/usr/include/tk.h
/Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Tk.framework/
Versions/8.4/Headers/tk.h
/Developer/SDKs/MacOSX10.5.sdk/usr/include/tk.h
/Library/Frameworks/Tk.framework/Versions/8.5/Headers/tk.h
/System/Library/Frameworks/Tk.framework/Versions/8.4/Headers/tk.h
/usr/include/tk.h
/usr/local/WordNet-3.0/include/tk/tk.h
===

This seems to mean that the Tk 8.4 framework is installed
in
===
/Developer/SDKs/MacOSX10.4u.sdk/System/Library/Frameworks/Tk.framework/
Versions/8.4/
/Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Tk.framework/
Versions/8.4/Headers/tk.h
/System/Library/Frameworks/Tk.framework/Versions/8.4/Headers/tk.h
===

Whereas Tk 8.5 is installed in:
===
/Library/Frameworks/Tk.framework/Versions/8.5/
===

Which ones should I delete? Should I remove all the other tk.h files?

Sorry if these are rather dumb questions, but I really do appreciate
the help.

Best,
Edward

On May 18, 1:09 am, Ned Deily  wrote:
> In article
> ,
>  Edward Grefenstette  wrote:
>
> > I thought of this. I uninstalled Tk from macports, but the same error
> > crops up. Evidently, Tk 8.5 remains installed somewhere else, but I
> > don't know where. How can I find out?
>
> Look in /Library/Frameworks for Tcl.framework and Tk.framework.  You can
> safely delete those if you don't need them.  But also make sure you
> update to the latest 2.6 (currently 2.6.2) python.org version; as noted,
> the original 2.6 python.org release had issues with user-installed Tcl
> and Tk frameworks.
>
> --
>  Ned Deily,
>  [email protected]

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Kevin Walzer

Edward Grefenstette wrote:



Whereas Tk 8.5 is installed in:
===
/Library/Frameworks/Tk.framework/Versions/8.5/
===



Delete this one if you want to ensure that Python sees 8.4.

--
Kevin Walzer
Code by Kevin
http://www.codebykevin.com
--
http://mail.python.org/mailman/listinfo/python-list


Generating Tones With Python

2009-05-17 Thread Adam Gaskins
I am pretty sure this shouldn't be as hard as I'm making it to be, but 
how does one go about generating tones of specific frequency, volume, and 
L/R pan? I've been digging around the internet for info, and found a few 
examples. One was with gstreamer, but I can't find much in the 
documentation to explain how to do this. Also some people said to use 
tkSnack, but the one example I found didn't do anything on my machine: 
no error, no sound. I'd like this to be cross platform, 
but at this point I just want to do it any way I can.

Thanks,
-Adam
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Ned Deily
In article 
,
 Edward Grefenstette  wrote:
> Bingo! Updating to Python 2.6.2 did the trick (I had 2.6). I just had
> to relink the /usr/bin/python to the Current directory in /Library/
> Frameworks/Python.framework/Versions/ and everything worked without
> deletions etc. Thanks for your help, everyone!

Glad that helped but beware: changing /usr/bin/python is not 
recommended.  That link (and everything else in /usr/bin) is maintained 
by Apple and should always point to the OSX-supplied python at
/System/Library/Python.framework/Versions/2.5/bin/python

By default, the python.org installers create links at 
/usr/local/bin/python and /usr/local/bin/python2.6; use one of those 
paths to get to the python.org 2.6 or ensure /usr/local/bin comes before 
/usr/bin on your $PATH.

-- 
 Ned Deily,
 [email protected]

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Edward Grefenstette
Bingo! Updating to Python 2.6.2 did the trick (I had 2.6). I just had
to relink the /usr/bin/python to the Current directory in /Library/
Frameworks/Python.framework/Versions/ and everything worked without
deletions etc. Thanks for your help, everyone!

Best,
Edward
-- 
http://mail.python.org/mailman/listinfo/python-list


Python mail truncate problem

2009-05-17 Thread David
Hi,

I am writing a Python script to process e-mails in a user's mail
account. What I want to do is update each e-mail's Status to 'R'
after processing it. However, the following script truncates old
e-mails, even though it updates the Status correctly. Does anybody
know how to fix this?

Thanks so much.

import mailbox

fp = '/var/spool/mail/' + user
mbox = mailbox.mbox(fp)

mbox.lock()    # without locking, mail delivered while the file is being
try:           # rewritten is a likely cause of the truncation described
    for key, msg in mbox.iteritems():
        flags = msg.get_flags()

        if 'R' not in flags:
            # now process the e-mail
            # now update status
            msg.add_flag('R')
            mbox[key] = msg
    mbox.flush()           # write all changes back in one pass
finally:
    mbox.unlock()

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: http://orbited.org/ - anybody using it?

2009-05-17 Thread alex23
On May 18, 9:14 am, Aljosa Mohorovic 
wrote:
> can anybody comment on http://orbited.org/ ?
> is it an active project? does it work?

I have no idea about your second question, but looking at PyPI, the
module was last updated on the 9th of this month, so I'd say it's very
much an active project:

http://pypi.python.org/pypi/orbited/0.7.9
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generating Tones With Python

2009-05-17 Thread Matus
Try http://www.pygame.org; as far as I remember there is a way to
generate sound arrays, though I'm not sure about the pan.

m

Adam Gaskins wrote:
> I am pretty sure this shouldn't be as hard as I'm making it to be, but 
> how does one go about generating tones of specific frequency, volume, and 
> L/R pan? I've been digging around the internet for info, and found a few 
> examples. One was with gstreamer, but I can't find much in the 
> documentation to explain how to do this. Also some people said to use 
> tksnack snack, but the one example I found to do this didn't do  anything 
> on my machine, no error or sounds. I'd like this to be cross platform, 
> but at this point I just want to do it any way I can.
> 
> Thanks,
> -Adam
-- 
http://mail.python.org/mailman/listinfo/python-list


Which C compiler?

2009-05-17 Thread Jive Dadson
I am using Python 2.4.  I need to make a native Python extension for 
Windows XP.  I have both VC++ 6.0 and Visual C++ 2005 Express Edition. 
Will VC++ 6.0 do the trick?  That would be easier for me, because the 
project is written for that one.  If not, will the 2005 compiler do it?


Thanks much,
"Jive"
--
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Martin v. Löwis
>>> It also might make it easier for alternate implementations to support
>>> the same API so some modules could work cross implementation - but I
>>> suspect that's a non-goal of this PEP :).
>>> 
>>
>> Indeed :-) I'm also skeptical that this would actually allow
>> cross-implementation modules to happen. The list of functions that
>> an alternate implementation would have to provide is fairly long.
>>
>>   
> 
> Just in case you're unaware of it; the company I work for has an open
> source project called Ironclad.

I was unaware indeed; thanks for pointing this out.

IIUC, it's not just an API emulation, but also an ABI emulation.

> In particular we have had to address the issue of the GIL and extensions
> (IronPython has no GIL) and reference counting (which IronPython also
> doesn't) use.

I think this somewhat strengthens the point I was trying to make: An
alternate implementation that tries to be API compatible has to consider
so many things that it is questionable whether making Py_INCREF/DECREF
functions would be any simplification.

So I just ask:
a) Would it help IronClad if it could restrict itself to PEP 384
   compatible modules?
b) Would further restrictions in the PEP help that cause?

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generating Tones With Python

2009-05-17 Thread John O'Hagan
On Mon, 18 May 2009, Adam Gaskins wrote:
> I am pretty sure this shouldn't be as hard as I'm making it to be, but
> how does one go about generating tones of specific frequency, volume, and
> L/R pan? I've been digging around the internet for info, and found a few
> examples. One was with gstreamer, but I can't find much in the
> documentation to explain how to do this. Also some people said to use
> tkSnack, but the one example I found didn't do anything on my machine:
> no error, no sound. I'd like this to be cross platform,
> but at this point I just want to do it any way I can.
[...]

I've done this using the subprocess module to call the sox program (which has 
a built-in synth to generate customisable tones, and can play sound files) or 
using the socket module to send control messages to a running instance of the 
fluidsynth program (which can generate sound using soundfonts). The former is 
very simple, the latter is very responsive to quick changes. I think both 
programs are cross-platform.
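
The sox variant is tiny; a minimal sketch with subprocess, assuming the
play utility from the sox package is on $PATH (pan is left out here, but
sox has effects for that too):

import subprocess

def play_tone(freq=440, seconds=1.0, volume=0.5):
    # -n is sox's null input; synth generates the tone as an effect
    subprocess.call(['play', '-v', str(volume), '-n',
                     'synth', str(seconds), 'sine', str(freq)])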

HTH,

John
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generating Tones With Python

2009-05-17 Thread Tim Harig
On 2009-05-18, Adam Gaskins  wrote:
> I am pretty sure this shouldn't be as hard as I'm making it to be, but 
> how does one go about generating tones of specific frequency, volume, and 
> L/R pan? I've been digging around the internet for info, and found a few 

This can be done with SDL, which would be my first suggestion.
There doesn't seem to be a direct Python SDL module, but pygame seems to
encompass the SDL interface.  If that doesn't work, you might be able
to write Python wrappers around the C SDL functions.

A second option, on Windows, may be to interface DirectX.  This would not
be cross platform and I don't personally know anything about it; but, it
should be possible to work with DirectX.  I found this with a Google
search:

http://directpython.sourceforge.net/


Finally, if all else fails you can generate the PCM directly and pipe it to
an external interface.  I have never written anything that outputs sound
directly; but, I have written a module to generate Morse tone wave files:

info:
http://ilthio.net/page.cgi?doc=n20
script:
http://ilthio.net/page.cgi?txt=cw.py

Look inside of the oscillator class.  I used an 8 bit sample width with
1 channel to save space (I originally implemented it in 16 bit mono).
To do more complex sounds, you will probably need to use a 16 bit sample
width.  Note that 8 bit wave files use unsigned integer values, little
endian, while 16 bit wave files use signed integers, little endian, two's
complement.  For L/R pan, you will also need to create the second channel
waveform.  You can easily find good references for the different wave file
PCM formats with a search engine query.
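
Putting that together, a minimal sketch of a 16 bit stereo tone with
volume and a simple linear pan, written with the stdlib wave module (the
function name and parameters are illustrative):

import math
import struct
import wave

def write_tone(path, freq=440.0, volume=0.5, pan=0.0,
               seconds=1.0, rate=44100):
    # volume is 0..1; pan runs from -1.0 (full left) to +1.0 (full right)
    left = volume * min(1.0, 1.0 - pan)
    right = volume * min(1.0, 1.0 + pan)
    w = wave.open(path, 'wb')
    w.setnchannels(2)              # stereo
    w.setsampwidth(2)              # 16 bit signed, little endian
    w.setframerate(rate)
    frames = []
    for n in range(int(seconds * rate)):
        s = math.sin(2 * math.pi * freq * n / rate)
        # one left and one right sample per frame, two's complement
        frames.append(struct.pack('<hh', int(32767 * s * left),
                                  int(32767 * s * right)))
    w.writeframes(''.join(frames))
    w.close()

write_tone('tone.wav', freq=440.0, volume=0.8, pan=-0.3)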

If the SDL interface doesn't work for you, then you might be able to
generate a PCM format and pipe it to an external player.  In this scenario
you would likely ignore actually creating the wave file.  You would instead
just create the raw PCM format inside of the array container and then
pipe bits of it out to the wave player program.  Without the format data
contained in the RIFF header of the wave file, you will need to inform
the player as to exactly what format you will be feeding it.  Most Unix
players have command line options that will allow you to specify the
format.  I am not sure whether Windows based players allow similar
options.  Under Linux, you could pipe the stream directly to the sound
card device interface file in the /dev filesystem if you know what bitrate
the soundcard uses internally.

This would likely require a second process or second thread to make sure
that you can feed the PCM output in real time.
-- 
http://mail.python.org/mailman/listinfo/python-list