Re: Does Python really follow its philosophy of "Readability counts"?

2009-01-12 Thread bieffe62
On 12 Gen, 00:02, Paul Rubin  wrote:
> Carl Banks  writes:
> > and where it was manipulated for that matter.
>
> > This criticism is completely unfair.  Instance variables have to be
> > manipulated somewhere, and unless your object is immutable, that is
> > going to happen outside of __init__.  That's true in Java, C++, and
> > pretty much any other language.
>
> The criticism is very valid.  Some languages do support immutable
> variables (e.g. "final" declarations in Java, "const" in C++, or
> universal immutability in pure functional languages) and they do so
> precisely for the purpose of taming the chaos of uncontrolled
> mutation.  It would be great if Python also supported immutability.
>
> > I'm not going to argue that this doesn't hurt readability, because it
> > does (though not nearly as much as you claim).  But there are other
> > considerations, and in this case the flexibility of defining
> > attributes outside __init__ is worth the potential decrease in
> > readability.
>
> There are cases where this is useful but they're not terribly common.
> I think it would be an improvement if creating new object attributes
> was by default not allowed outside the __init__ method.  In the cases
> where you really do want to create new attributes elsewhere, you'd
> have to explicitly enable this at instance creation time, for example
> by inheriting from a special superclass:
>
>    class Foo (DynamicAttributes, object): pass
>

You cannot do that directly, but you can restrict an instance to a fixed
set of attributes by defining the __slots__ class variable.


Ciao
-
FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: Does Python really follow its philosophy of "Readability counts"?

2009-01-12 Thread bieffe62
On 12 Gen, 14:45, Paul Rubin  wrote:
> [email protected] writes:
> > >    class Foo (DynamicAttributes, object): pass
>
> > You cannot do that, but you can establish a fixed set of attributes by
> > defining the __slot__ class variable.
>
> That is not what __slot__ is for.


Really? It seems to work:

>>> class A(object):
... __slots__ = ('a', 'b')
... def __init__(self): self.not_allowed = 1
...
>>> a = A()
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 3, in __init__
AttributeError: 'A' object has no attribute 'not_allowed'
>>>

Ciao
-
FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: Possible bug in Tkinter - Python 2.6

2009-01-15 Thread bieffe62
On 15 Gen, 11:30, "Eric Brunel"  wrote:
> Hi all,
>
> I found a behaviour that might be a bug in Tkinter for Python 2.6. Here is  
> the script:
>
> -
>  from Tkinter import *
>  from tkMessageBox import *
>  from tkFileDialog import *
>
> root = Tk()
>
> def ask_file():
>    file_name = askopenfilename()
>    print file_name
>
> def ask_confirm():
>    answer = askyesno()
>    print answer
>
> Button(root, text='Ask file', command=ask_file).pack()
> Button(root, text='Ask confirm', command=ask_confirm).pack()
>
> root.mainloop()
> -
>
> Scenario:
> - Run the script.
> - Click the 'Ask confirm' button and answer 'Yes'; it should print True,  
> which is the expected answer.
> - Click the 'Ask file' button, select any file and confirm.
> - Click the 'Ask confirm' button and answer 'Yes'.
>
> At the last step, the script prints 'False' for me, which is quite wrong.  
> Can anybody confirm this?
>
> I reproduced this problem on Linux Fedora Core 4 and Suse Enterprise  
> Server 9, and on Solaris 8 for Sparc and Solaris 10 for Intel. However, it  
> seems to work correctly on Windows 2000. I could only test with Python  
> 2.6, and not 2.6.1. But I didn't see any mention of this problem in the  
> release notes.
>
> And BTW, if this is actually a bug, where can I report it?
>
> TIA
> --
> python -c "print ''.join([chr(154 - ord(c)) for c in  
> 'U(17zX(%,5.zmz5(17l8(%,5.Z*(93-965$l7+-'])"

It works here (no bug): Python 2.6.1 on Windows XP.
If it is a bug, maybe it is platform-specific ...


Ciao

FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: Slow Queue.queue? (was: slow network)

2009-01-15 Thread bieffe62
On 15 Gen, 10:22, Laszlo Nagy  wrote:
> > then the speed goes up to 64 messages/sec on windows and 500
> > messages/sec on Linux.
>
> Finally I could reach 1500 messages/sec without using the queue. If I
> comment out one line (use the queue instead of direct write into socket)
> then speed decreases to 40-60 messages/sec. I don't understand why the
> slow version is slower by a factor of 40?
>
> Fast version:
>
>     def send_message(self,sender,recipient,msgtype,body,timeout=3600):
>
> self.write_str(self.serializer.serialize([sender,recipient,msgtype,body]))
>
> Slow version:
>
>     def send_message(self,sender,recipient,msgtype,body,timeout=3600):
>         self.outgoing.put(self.serializer.serialize([
>             sender,recipient,msgtype,body
>         ]),1,timeout)
>
> plus this method, executed in a different thread:
>
>     def _process_outgoing(self):
>         try:
>             while not self.stop_requested.isSet():
>                 data_ok = False
>                 while not self.stop_requested.isSet():
>                     try:
>                         data = self.outgoing.get(1,1)
>                         data_ok = True
>                         break
>                     except Queue.Empty:
>                         pass
>                 if data_ok:
>                     self.write_str(data)
>         except Exception, e:
>             if self.router:
>                 if not isinstance(e,TransportClosedError):
>                     self.router.logger.error(dumpexc(e))
>                 self.router.unregister_endpoint(self)
>             self.shutdown()
>             raise SystemExit(0)


I would try something like this inside _process_outgoing:

while not self.stop_requested.isSet():
    data_ok = False
    while not self.stop_requested.isSet():
        if not self.outgoing.empty():
            try:
                data = self.outgoing.get(True, 0.1)
                data_ok = True
                break
            except Queue.Empty:
                pass
        else:
            time.sleep(0.1)  # maybe, if the thread uses too much CPU
    if data_ok:
        self.write_str(data)

The hypotheses I made for this suggestion are:

- if the queue is found empty, Queue.get could keep the global
interpreter lock until a message arrives
  (blocking also the thread that puts the message in the queue)

- if the queue is found empty, the handling of the exception can slow
down the execution

Not sure these are good guesses, because at the rate of your messages
the queue should be almost always full,
and most probably the implementation of Queue.get is smarter than
me :-). Anyway, it is worth a try ...

Also, if you are using fixed-length queues (the ones that make the
sender wait when the queue is full), try to increase
the queue size, or use an infinite-size queue.
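
The difference between a bounded queue (where put() waits when full) and an
infinite one can be seen directly. A small sketch, using the Python 3 module
name `queue` (it was `Queue` in the Python 2 of this thread):

```python
import queue

bounded = queue.Queue(maxsize=2)    # put() blocks (or raises) when full
bounded.put(1)
bounded.put(2)
try:
    bounded.put(3, block=False)     # with block=True this would block instead
except queue.Full:
    print('bounded queue is full')

unbounded = queue.Queue()           # maxsize <= 0 means infinite size
for i in range(1000):
    unbounded.put_nowait(i)         # never blocks
print(unbounded.qsize())
```

With a sender producing faster than the consumer drains, a bounded queue
throttles the sender at every put(); an unbounded one only costs memory.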

HTH

Ciao

FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: Client Socket Connection to Java server

2009-01-16 Thread bieffe62
On 16 Gen, 08:10, TechieInsights  wrote:
> I am having problems with a socket connection to a Java server.  In
> java I just open the socket, pass the length and then pass the bits
> across the socket.
>
> I created a socket object:
>
> import socket
>
> class MySocket:
>         def __init__(self, host='localhost', port = 28192, buffsize = 1024):
>                 socket.setdefaulttimeout(10)
>
>                 self.host = host
>                 self.port = port
>                 self.buffsize = buffsize
>                 self.socket = socket.socket(socket.AF_INET, 
> socket.SOCK_STREAM)
>                 self.socket.connect((host, port))
>
>         def send(self, data):
>                 self.socket.send(data)
>
>         def receive(self):
>                 return self.socket.recv(self.buffsize)
>
>         def sendAndReceive(self, data):
>                 self.send(data)
>                 return self.receive()
>
>         def close(self):
>                 self.socket.close()
>
> But the java server gives the error:
> WARNING:  Message length invalid.  Discarding
>
> The data is of type string (xml).  Am I doing something wrong?  I know
> you have to reverse the bits when communicating from C++ to Java.
> Could this be the problem? I figured it would not because it said the
> length was invalid.  I just started looking at python sockets
> tonight... and I don't have a real deep base with socket connections
> as it is... any help would be greatly appreciated.
>
> Greg

What is the server protocol? What are you sending? Saying 'xml' is not
enough to understand the problem ...
Byte order could be a problem only if the message includes binary
representations of 2-byte or 4-byte integers. With XML this should not
be the case.
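
If the Java server does expect a binary length prefix before the XML payload
(a common framing when the server reads the length with
DataInputStream.readInt(), though the actual protocol here is unknown), that
length must be sent in big-endian network order, which is what Java uses;
struct's '!' modifier does this. A hypothetical sketch in Python 3:

```python
import struct

payload = '<msg>hello</msg>'.encode('utf-8')
# '!' = network (big-endian) byte order; 'I' = unsigned 4-byte integer
frame = struct.pack('!I', len(payload)) + payload
print(repr(frame[:4]))  # the 4-byte length header the server would read first
```

A little-endian prefix ('<I') would look like a wildly wrong length to the
Java side, which matches the "Message length invalid" warning.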

Ciao

FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: thread. question

2009-02-09 Thread bieffe62
On Feb 9, 2:47 pm, Tim Wintle  wrote:
> Hi,
> This is my first post here - google hasn't helped much but sorry if this
> has been asked before.
>
> I've been wondering about some of the technicalities of locks in python
> (2.4 and 2.5 mainly).
>
> I'm using the "old" thread module as (1) I prefer the methods and (2) It
> should be a tiny bit faster.
>
> As far as I can see, the thread module is fairly much a direct wrapper
> around "PyThread_allocate_lock" etc. - defined in
> "thread_pthread.h" (took me ages to find the definitions in a header
> file!), so as far as I can tell the implementation is very closely
> wrapped around the system's threading implementation.
>
> Does that mean that every python thread is actually a real separate
> thread (for some reason I thought they were implemented in the
> interpreter)?
>

I've never seen the C implementation of Python threads, but I know that
Python threads are real OS threads, not 'green threads' inside the
interpreter.


> Does that also mean that there is very little extra interpreter-overhead
> to using locks (assuming they very infrequently block)? Since if you use
> the thread module direct the code doesn't seem to escape from C during
> the acquire() etc. or am I missing something important?
>

One significant difference from threading in lower-level languages
is that in Python single interpreter instructions
are atomic. This makes things a bit safer but also less reactive. It is
especially bad if you use C modules which have function
calls that take a long time (but AFAIK most of the standard IO modules
have workarounds for this),
because your program will not switch threads until the C function
returns.
The reason - or one of the reasons - for this is that Python
threads share the interpreter, which is locked
by a thread before execution of a statement and released at the end
(  this is the 'Global Interpreter Lock' or GIL, of
which you probably found many references if you googled for Python
threads ). Among other things, this seems to mean that
a multi-threaded Python application cannot take full advantage of
multi-core CPUs (but the detailed reason for that escapes me...).

For these reasons, on this list, many suggest that if you have real-
time requirements, you should probably consider using
sub-processes instead of threads.
Personally I have used threads a few times, although always with
very soft real-time requirements, and I have never been bitten by the
GIL (meaning that I was anyway able to meet my timing requirements).
So, to add another acronym to my post, YMMV.


> Thanks,
>
> Tim Wintle

Ciao

FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: Threads in PyGTK: keep typing while ping-ing

2009-02-16 Thread bieffe62
On 16 Feb, 14:47, Mamahita Sela  wrote:
> Dear All,
>
> I have read several howtos for threading in PyGTK. I have tried some but with 
> no luck.
>
> What i want is: i can keep typing in gtk.TextView while periodically doing 
> ping some hosts.
>
> I think, the thread part is working since i can ping so fast (even for not 
> reply host, around 3 s for 27 no-reply-host). But, when the ping action is 
> running, i can not typing in textview. What i type while pinging will show up 
> after ping action is done.
>
> (I am using 2.5.1, pygtk 2.10.6 on Linux x86)
>
> This is my code:
> import threading
> import commands
> import gtk
> import gobject
>
> gtk.gdk.threads_init()
>
> class PingHost(threading.Thread):
>     def __init__(self, host):
>         threading.Thread.__init__(self)
>         self.host = host
>         self.result = ()
>     def run(self):
>         self.result = self.host, commands.getstatusoutput('ping %s -c1' 
> %(self.host))[0]
>
> class Main:
>     def __init__(self):
>         self.win = gtk.Window()
>         self.win.connect('destroy', gtk.main_quit)
>         #
>         self.textb = gtk.TextBuffer()
>         self.textv = gtk.TextView(self.textb)
>         self.textv.set_size_request(500, 500)
>         #
>         self.win.add(self.textv)
>         self.win.show_all()
>         #
>         gobject.timeout_add(5000, self.do_ping)
>
>         def do_ping(self):
>         all_threads = []
>         #
>         for h in range(100, 105):
>             host = '192.168.0.%d' %(h)
>             #
>             worker = PingHost(host)
>             worker.start()
>             all_threads.append(worker)
>         #
>         for t in all_threads:
>             t.join()
>             print t.result
>         #
>         return True
>
> if __name__ == '__main__':
>     app = Main()
>     gtk.main()
>
> Any help would be appreciated :)
> Best regards,
> M

As you have already been told, join() blocks until the thread is
terminated, and you should avoid that.
A simple fix could be to add a timeout to t.join, doing something like:

    t.join(JOIN_TIMEOUT)
    if not t.isAlive():
        print t.result

Anyway, starting a thread for each ping is not very efficient, so you
could do better by having a 'ping server thread'
that repeatedly performs the following steps:
- accepts a ping command on a Queue.Queue
- pings the target and waits for the result
- publishes the result on another Queue.Queue

At this point, the do_ping action should:
  - send a ping command to the ping server
  - check (in non-blocking mode) if there is any result on the ping
results queue
  - display the result, if any
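
The steps above can be sketched roughly like this, using Python 3 module
names (`queue` was `Queue` in the Python 2 of this thread) and a fake_ping
stand-in, since the real ping call is beside the point being made:

```python
import queue
import threading

def fake_ping(host):
    # hypothetical stand-in for a real ping (a subprocess or raw ICMP socket)
    return 0  # pretend every host answers

def ping_server(cmd_q, result_q, ping_fn=fake_ping):
    # worker loop: take a host from the command queue, ping it,
    # publish (host, status) on the result queue
    while True:
        host = cmd_q.get()
        if host is None:        # sentinel tells the server to stop
            break
        result_q.put((host, ping_fn(host)))

cmd_q, result_q = queue.Queue(), queue.Queue()
server = threading.Thread(target=ping_server, args=(cmd_q, result_q))
server.start()

for h in range(100, 105):
    cmd_q.put('192.168.0.%d' % h)
cmd_q.put(None)                 # ask the server to exit
server.join()

# a GUI timer callback would poll like this, without ever blocking:
results = []
while True:
    try:
        results.append(result_q.get_nowait())
    except queue.Empty:
        break
print(len(results))
```

In the real GUI the join() would of course be dropped; do_ping would only
put commands and drain results with get_nowait(), so the main loop never
stalls.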

Another big speedup would be to avoid spawning a subprocess for each
ping, and instead
use sockets directly to send ICMP packets, using recipes like this one:

http://code.activestate.com/recipes/409689/

Ciao
--
FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: Get file name from file handle

2009-02-17 Thread bieffe62
On Feb 17, 9:21 am, loial  wrote:
> Is there anyway, having been passed a file handle, to get the
> filename?
>
> I am assuming not, but thought I would ask

If by file handle you mean the object returned by the 'file' and 'open'
functions, it has a 'name' attribute.
If by file handle you mean the file descriptor, i.e. the integer used
for low-level I/O, then there is no
way I know of. I believe that number is an index into an array of file
descriptors somewhere inside the
C library ( below the Python interpreter level ), but I don't know if/how
a user program can access it.
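
The first case can be checked in a couple of lines; the path used here is
just a throwaway name for the demo:

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), 'example.txt')
f = open(path, 'w')
print(f.name == path)   # the handle remembers the name it was opened with
f.close()
os.remove(path)
```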

Ciao

FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: Threads in PyGTK: keep typing while ping-ing

2009-02-18 Thread bieffe62
On 17 Feb, 02:53, Mamahita Sela  wrote:
> Dear FB,
>
> > As you have been already told, join() blocks until the
> > thread is
> > terminated. and you should avoid that.
> > A simple fix could by add a timeout to t.join, doing
> > something like :
> >         t.join(JOIN_TIMEOUT);
> >         if not t.isAlive():
> >             print t.result
>
> Thanks for your great sample. And, thank you for your tips on speedup the 
> ping stuff.
>
> However, i still have problem when adding timeout to join().
> - It works, but only print out thread result that is not alive, as we put in 
> the code. So, i never get result from unsuccessful ping action, which might 
> take more time than timeout specified.
> - I still have little delay in GUI. Is it possible to make it smooth in GUI 
> operation? When i put more timeout (say: 0.5 from 0.2), i will have to wait  
> longer.
>
> PS: thanks for pointing out inefficient ping action :) I will follow the 
> tips. But, another time I might need to do something else, like downloading 
> files, or checking for serial devices. So, this thread thing, imho, is very 
> important.
>
> Any help would be very appreciated.
> M.


You could make all_threads an attribute of the Main instance, and use it
to keep track of the threads that are still alive. Your
do_ping code would then be something like this (NOT TESTED):

    for h in range(100, 105):
        host = '192.168.0.%d' % h
        worker = PingHost(host)
        worker.start()
        self.all_threads.append(worker)

    for t in self.all_threads[:]:  # iterate over a copy: we remove items
        t.join(JOIN_TIMEOUT)
        if not t.isAlive():
            print t.result
            self.all_threads.remove(t)

    return True

In this way, you can use a small value for JOIN_TIMEOUT and make the
GUI more responsive. The threads which are not yet
finished when do_ping is called will stay in self.all_threads
and will be tested the next time.
I did not check the documentation, but assumed that
gobject.timeout_add installs a _periodic_ timer. If it is not
so, you have to reinstall the timer at the end of do_ping, if
self.all_threads is not empty.
And don't forget to initialize self.all_threads in __init__ ;)

HTH

Ciao
-
FB





--
http://mail.python.org/mailman/listinfo/python-list


Re: Instance attributes vs method arguments

2008-11-25 Thread bieffe62
On 25 Nov, 08:27, John O'Hagan <[EMAIL PROTECTED]> wrote:
> Is it better to do this:
>
> class Class_a():
>         def __init__(self, args):
>                 self.a = args.a        
>                 self.b = args.b
>                 self.c = args.c
>                 self.d = args.d
>         def method_ab(self):
>                 return self.a + self.b
>         def method_cd(self):            
>                 return self.c + self.d
>
> or this:
>
> class Class_b():
>         def method_ab(self, args):
>                 a = args.a
>                 b = args.b
>                 return a + b
>         def method_cd(self, args)      
>                 c = args.c
>                 d = args.d
>                 return c + d
>
> ?
>
> Assuming we don't need access to the args from outside the class,
> is there anything to be gained (or lost) by not initialising attributes that
> won't be used unless particular methods are called?
>
> Thanks,
>
> John O'Hagan

If 'args' is an object of some class which has the attributes a, b, c, d,
why don't you just add
method_ab and method_cd to the same class, either directly or by
subclassing it?

If for some reason you can't do the above, just make two functions:

def function_ab(args): return args.a + args.b
def function_cd(args): return args.c + args.d

One good thing about Python is that you don't have to make classes if you
don't need to ...

Ciao
--
FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused about class relationships

2008-11-27 Thread bieffe62
On 27 Nov, 06:20, John O'Hagan <[EMAIL PROTECTED]> wrote:
> Apologies if this is a D.Q., I'm still learning to use classes, and this
> little problem has proved too specific to find in the tutorials.
>
> I have two classes with a relationship that I find confusing.
>
> One is called Engine, and it has a method (bar_builder) which generates
> instances of the other class, called Bar (not as in Foo but as in bar of
> music; inherits from list).
>
> Also, Bar takes the generating instance of Engine as an argument to its
> __init__ method:
>
> class Bar(list):
>
>         def __init__(self, a_bar, args, engine):
>                 list.__init__ (self, a_bar)
>                 self[:] = a_bar        
>                 self.args = args
>                 self.engine = engine
>                 #more instance attributes...
>
>         #methods...
>
> class Engine:
>
>         def __init__(self, args):
>                 self.args = args                        
>                 #more instance attributes...
>
>         def bar_builder(self):
>                 #body of method generates lists...
>                 yield Bar([generated_list], args, self)
>
>         #more methods...
>
> #(other stuff...)
>
> def main(args):
>
>             engine = Engine(args)
>             bars = engine.bar_builder()
>             for a_bar in bars:
>                 #play the music!...
>
> While this works (to my surprise!) and solves the problem which motivated it
> (i.e. Engine instances need to pass some attributes to Bar instances ), it
> seems too convoluted. Should one class inherit the other? If so, which way
> around? Or is it fine as is?
>
> I'm hoping this is a common trap I've fallen into; I just haven't been able to
> get my head around it. (I'm a musician...)
>
> John O'Hagan


If you need the engine to generate a (potentially) infinite sequence
of bars, this approach seems the most linear
one you could take ...
Inheritance is for classes which share attributes/methods, which is not
the case here ...

Then, there are always many ways to do the same thing, and sometimes
they are equivalent except for
programmer taste ...

Ciao
--
FB







--
http://mail.python.org/mailman/listinfo/python-list


Re: Managing timing in Python calls

2008-12-15 Thread bieffe62
On 15 Dic, 16:21, Ross  wrote:
> I'm porting some ugly javascript managed stuff to have an equivalent
> behaviour in a standalone app. It uses events that arrive from a server,
> and various small images.  In this standalone version, the data is local
> in a file and the images in a local directory.
>
> My AJAX code managed a timely presentation of the info, and in the
> Javascript that relied on the ugly:
>
>         myImage.onload = function(){dosomething_when_it's_finished}
>
> structure. Also, I used the similarly unpretty:
>
>         var t = window.setTimeout( function () { do_when_timed_out}
>
> structures which allows stuff to happen after a perscribed period.
>
> In my python implementation my first guess is to use a thread to load my
> image into a variable
>
>       myImage = wx.Image("aPic.gif",
>                 wx.BITMAP_TYPE_GIF ).ConvertToBitmap()
>
> so that it won't block processing. (Though perhaps it'll just happen so
> fast without a server involved that I won't care.)
>
> Is there a nice equivalent of a 'setTimeout' function in python? ie to
> call a function after some time elapses without blocking my other
> processing?  I suppose just a thread with a time.sleep(x_mS) in it would
> be my first guess?
>
> Can anyone give me some feedback on whether that's a logical path
> forward, or if there are some nicer constructs into which I might look?
>
> Thanks for any suggests... Ross.

Python has in its standard library a Timer class, which is actually
implemented as a thread (I think) ...
However, when using a GUI package, I think it is better to use GUI-
specific functions for event-driven programming,
to make sure that your code does not mess with the GUI event loop and to
work around the lack of thread-safety in some GUI libraries.
This applies to timers/timeouts but also to executing code when specific
I/O events occur ( e.g. the receiving of data from a socket ).
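
The timer class mentioned above is threading.Timer; a minimal sketch
(blocking on join() only to keep the demo short, which a real program
reacting to the timeout would not do):

```python
import threading

fired = []

def on_timeout():
    fired.append('timed out')

# threading.Timer is a Thread subclass that runs the callback once,
# in its own thread, after the given delay (in seconds)
t = threading.Timer(0.05, on_timeout)
t.start()
t.join()        # wait for the callback; a GUI would just keep running
print(fired)
```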

Although I'm not an expert on wxPython, a quick search pointed me to this
page:

http://wxpython.org/onlinedocs.php

from which it seems that wx.TimerEvent could be what you need.

I agree with you that for loading images from local files a thread
should not be needed.

P.S.: notice that the documentation refers to the C++ library on which
the Python wrapper is built. This is often the case for
Python wrappers of GUI libraries. However, the most important ones come
with a rich set of demo programs (and the wxPython demo suite is quite
complete) from which one can learn what he needs.

Ciao
-
FB




--
http://mail.python.org/mailman/listinfo/python-list


Re: How to represent a sequence of raw bytes

2008-12-22 Thread bieffe62
On 22 Dic, 03:23, "Steven Woody"  wrote:
> Hi,
>
> What's the right type to represent a sequence of raw bytes.  In C, we usually 
> do
>
> 1.  char buf[200]  or
> 2.  char buf[] = {0x11, 0x22, 0x33, ... }
>
> What's the equivalent representation for above in Python?
>
> Thanks.
>
> -
> narke

Usually, if I have to manipulate bytes (e.g. computing a checksum,
etc...) I just use a list of numbers:

buf = [0x11, 0x22, 0x33, ...]

Then, when I need to put it in a buffer similar to the one in C (e.g.
before sending a packet of bytes through a socket
or another I/O channel), I use struct.pack (note the '*', which passes
each byte as a separate argument):

import struct
packed_buf = struct.pack('B'*len(buf), *buf)

Similarly, if I get a packet of bytes from an I/O channel and I need
to operate on them as single bytes, I do (struct.unpack returns a tuple):

buf = list(struct.unpack('B'*len(packed_buf), packed_buf))

Note that struct.pack and struct.unpack can transform packed bytes into
other kinds of data, too ...
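
As a quick check, here is the round trip in Python 3 syntax, where the
packed buffer is a bytes object (in the Python 2 of this thread it would
be a str):

```python
import struct

buf = [0x11, 0x22, 0x33]
fmt = '%dB' % len(buf)               # e.g. '3B': three unsigned bytes
packed = struct.pack(fmt, *buf)      # the '*' passes one argument per byte
unpacked = list(struct.unpack(fmt, packed))
print(unpacked == buf)
```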

There are other - maybe more efficient - way of handling bytes in
python programs, like using array as already suggested, but, up
to now, I never needed them in my python programs, which are not real-
time stuff, but sometime need to process steady flows of
data.

Ciao

FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: If an OS was to be written in Python, how'w it look?

2008-10-06 Thread bieffe62
On 6 Ott, 06:16, process <[EMAIL PROTECTED]> wrote:
> If an OS was to be written in Python and the hardware optimized for
> it, what changes would be made to the hardware to accomodate Python
> strenghs and weaknesses?
>
> Some tagged architecture like in Lisp 
> machines?http://en.wikipedia.org/wiki/Tagged_architecture
>
> What else?

I would say that a python-processor should have dictionary lookup
(hash tables), garbage collection  and dynamic lists implemented by
hardware/firmware.

Maybe in another twenty years ...

Ciao

FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: how to start thread by group?

2008-10-06 Thread bieffe62
On 6 Ott, 15:24, oyster <[EMAIL PROTECTED]> wrote:
> my code is not right, can sb give me a hand? thanx
>
> for example, I have 1000 urls to be downloaded, but only 5 thread at one time
> def threadTask(ulr):
>   download(url)
>
> threadsAll=[]
> for url in all_url:
>      task=threading.Thread(target=threadTask, args=[url])
>      threadsAll.append(task)
>
> for every5task in groupcount(threadsAll,5):
>     for everytask in every5task:
>         everytask.start()
>
>     for everytask in every5task:
>         everytask.join()
>
>     for everytask in every5task:        #this does not run ok
>         while everytask.isAlive():
>             pass

Thread.join() blocks until the thread is finished. You are assuming
that the threads
terminate exactly in the order in which they are started. Moreover, before
starting the
next 5 threads you are waiting for all previous 5 threads to
complete, while I
believe your intention was to always have the full load of 5 threads
downloading.

I would restructure my code with something like this ( WARNING: the
following code is
ABSOLUTELY UNTESTED and shall be considered only as pseudo-code to
express my idea of
the algorithm (which, also, could be wrong:-) ):


import threading, time

MAX_THREADS = 5
DELAY = 0.01 # or whatever

def task_function(url):
    download(url)

def start_thread(url):
    task = threading.Thread(target=task_function, args=[url])
    task.start()
    return task

def main():
    all_urls = load_urls()
    all_threads = []
    while all_urls or all_threads:
        while all_urls and len(all_threads) < MAX_THREADS:
            all_threads.append(start_thread(all_urls.pop(0)))
        for t in all_threads[:]:   # copy: we remove while iterating
            if not t.isAlive():
                t.join()
                all_threads.remove(t)
        time.sleep(DELAY)


HTH

Ciao
-
FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: Cookielib in Jython

2008-10-06 Thread bieffe62
On 6 Ott, 13:19, Felipe De Bene <[EMAIL PROTECTED]> wrote:
> Hi There,
> I'm trying to run an App I wrote in Python 2.5.2 in Jython 2.2.1 and
> everything works fine except when I try to import the Standard
> CPython's cookielib. I know this may sound stupid, I could use an
> advice here on what's wrong. Thanks in advance,
> Felipe.
>
> Output:
> Jython 2.2.1 on java1.6.0_07
> Type "copyright", "credits" or "license" for more information.>>> import 
> cookielib
>
> Traceback (innermost last):
>   File "", line 1, in ?
> ImportError: no module named cookielib>>> from cookielib import *
>
> Traceback (innermost last):
>   File "", line 1, in ?
> ImportError: no module named cookielib>>> from CookieLib import *
>
> Traceback (innermost last):
>   File "", line 1, in ?
> ImportError: no module named CookieLib

Obviously, cookielib is not in your Jython installation.
If this module is a pure Python module and not a wrapper around an
underlying C
module, you could simply try to copy it from a CPython installation
and
compile it with Jython alongside your code. If the module does not use any
feature
of the language introduced after Python 2.2, or other unsupported
modules,
it could work, and you can use it inside your program as if it were one of
your own modules.

HTH

FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: HARD REAL TIME PYTHON

2008-10-07 Thread bieffe62
On 7 Ott, 01:25, "Blubaugh, David A." <[EMAIL PROTECTED]> wrote:
> To All,
>
> I have done some additional research into the possibility of utilizing
> Python for hard real time development.  I have seen on various websites
> where this has been discussed before on the internet.  However, I was
> wondering as to how successful anyone has truly been in developing a
> program project either in windows or in Linux that was or extremely
> close to real time constraints? For example is it possible to develop a
> python program that can address an interrupt or execute an operation
> within 70 Hz or less?? Are there any additional considerations that I
> should investigate first regarding this matter??
>
> Thanks,
>
> David Blubaugh
>
> This e-mail transmission contains information that is confidential and may be
> privileged. It is intended only for the addressee(s) named above. If you 
> receive
> this e-mail in error, please do not read, copy or disseminate it in any 
> manner.
> If you are not the intended recipient, any disclosure, copying, distribution 
> or
> use of the contents of this information is prohibited. Please reply to the
> message immediately by informing the sender that the message was misdirected.
> After replying, please erase it from your computer system. Your assistance in
> correcting this error is appreciated.

AFAIK, the requirement for hard real time is that response times have
to be predictable, rather than
generally 'fast'.
Very high level languages like Python use many features which are by
their nature unpredictable, or
difficult to predict, in their response times: to name two, garbage
collection and hash table lookups.
Usually real-time programmers tend not to use these features even when
they program in lower-level
languages such as C, or at least to use them only during
initialization, when being predictable is less
important.

So no, I would not use Python for hard real time ...
That said, I once used Python to simulate the
protocol of a device which my code (in Ada) had to interface. Typical
response times in this protocol were about 10ms, and my small Python
simulator usually managed to respond in that time, although sometimes
it delayed its response, causing the response timeout in my code to
expire ...


Ciao
-
FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: equivalent of py2exe in other os

2008-10-07 Thread bieffe62
On 7 Ott, 11:32, Astan Chee <[EMAIL PROTECTED]> wrote:
> Hi,
> I was just wondering if there is a equivalent of py2exe on linux
> (centOS) and mac (OS X). I have a python script that uses wx and I dont
> want to install wx on linux/mac machines. What are my choices?
> Thanks
> Astan
>
> --
> "Formulations of number theory: Complete, Consistent, Non-trivial. Choose 
> two." -David Morgan-Mar
>
> Animal Logichttp://www.animallogic.com
>

For Linux I know of (but have never used myself) PyInstaller:
http://pyinstaller.python-hosting.com/
They seem to have an in-development version for OS X too, so maybe you
could give it a try ...

HTH
-
FB


Re: how to start thread by group?

2008-10-08 Thread bieffe62
On 7 Ott, 06:37, "Gabriel Genellina" <[EMAIL PROTECTED]> wrote:
> En Mon, 06 Oct 2008 11:24:51 -0300, <[EMAIL PROTECTED]> escribió:
>
> > On 6 Ott, 15:24, oyster <[EMAIL PROTECTED]> wrote:
> >> my code is not right, can sb give me a hand? thanx
>
> >> for example, I have 1000 urls to be downloaded, but only 5 thread at  
> >> one time
> > I would restructure my code with someting like this ( WARNING: the
> > following code is
> > ABSOLUTELY UNTESTED and shall be considered only as pseudo-code to
> > express my idea of
> > the algorithm (which, also, could be wrong:-) ):
>
> Your code creates one thread per url (but never more than MAX_THREADS  
> alive at the same time). Usually it's more efficient to create all the  
> MAX_THREADS at once, and continuously feed them with tasks to be done. A  
> Queue object is the way to synchronize them; from the documentation:
>
> 
>  from Queue import Queue
>  from threading import Thread
>
> num_worker_threads = 3
> list_of_urls = ["http://foo.com";, "http://bar.com";,
>                  "http://baz.com";, "http://spam.com";,
>                  "http://egg.com";,
>                 ]
>
> def do_work(url):
>      from time import sleep
>      from random import randrange
>      from threading import currentThread
>      print "%s downloading %s" % (currentThread().getName(), url)
>      sleep(randrange(5))
>      print "%s done" % currentThread().getName()
>
> # from this point on, copied almost verbatim from the Queue example
> # at the end ofhttp://docs.python.org/library/queue.html
>
> def worker():
>      while True:
>          item = q.get()
>          do_work(item)
>          q.task_done()
>
> q = Queue()
> for i in range(num_worker_threads):
>       t = Thread(target=worker)
>       t.setDaemon(True)
>       t.start()
>
> for item in list_of_urls:
>      q.put(item)
>
> q.join()       # block until all tasks are done
> print "Finished"
> 
>
> --
> Gabriel Genellina


Agreed.
I was trying to do what the OP was trying to do, but in a way that
works.
But keeping the threads alive and feeding them the URLs is a better
design, definitely.
And no, I don't think it's 'premature optimization': it is just
cleaner.

Ciao
--
FB


Re: Inefficient summing

2008-10-09 Thread bieffe62
On 8 Ott, 22:23, beginner <[EMAIL PROTECTED]> wrote:
> Hi All,
>
> I have a list of records like below:
>
> rec=[{"F1":1, "F2":2}, {"F1":3, "F2":4} ]
>
> Now I want to write code to find out the ratio of the sums of the two
> fields.
>
> One thing I can do is:
>
> sum(r["F1"] for r in rec)/sum(r["F2"] for r in rec)
>
> But this is slow because I have to iterate through the list twice.
> Also, in the case where rec is an iterator, it does not work.
>
> I can also do this:
>
> sum1, sum2= reduce(lambda x, y: (x[0]+y[0], x[1]+y[1]), ((r["F1"],
> r["F2"]) for r in rec))
> sum1/sum2
>
> This loops through the list only once, and is probably more efficient,
> but it is less readable.
>
> I can of course use an old-fashioned loop. This is more readable, but
> also more verbose.
>
> What is the best way, I wonder?
>
> -a new python programmer

The loop way is probably the right choice.
OTOH, you could make the 'reduce' approach more readable by writing it
like this:

def add_r( sums, r ): return sums[0]+r['F1'], sums[1]+r['F2']
sum_f1, sum_f2 = reduce( add_r, rec, (0,0) )
result = sum_f1/sum_f2

Less verbose than the for loop, and IMO almost as understandable: one
only needs to know the semantics of 'reduce' (no big thing for a
Python programmer), and most importantly the code does only one thing
per line.
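For comparison, here is a minimal sketch of the plain loop the OP finds verbose (float values are used in the sample data to avoid integer division; the records are the ones from the post):

```python
# One pass over the records, summing both fields at once.
rec = [{"F1": 1.0, "F2": 2.0}, {"F1": 3.0, "F2": 4.0}]

sum_f1 = sum_f2 = 0.0
for r in rec:                 # a single iteration over the list
    sum_f1 += r["F1"]
    sum_f2 += r["F2"]
result = sum_f1 / sum_f2      # 4.0 / 6.0
```

This also works when `rec` is an iterator, which the double-`sum` version does not.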


Ciao
-
FB


Re: Clever way of sorting strings containing integers?

2008-10-09 Thread bieffe62
On 9 Ott, 09:41, Holger <[EMAIL PROTECTED]> wrote:
> I tried to do this elegantly, but did not come up with a good solution
>
> Sort strings like
> foo1bar2
> foo10bar10
> foo2bar3
> foo10bar2
>
> So that they come out:
> foo1bar2
> foo2bar3
> foo10bar2
> foo10bar10
>
> I.e. isolate integer parts and sort them according to integer value.
>
> Thx
> Holger

This should work, if you have all the strings in memory:

import re
REXP = re.compile( r'\d+' )
lines = ['foo1bar2', 'foo10bar10', 'foo2bar3', 'foo10bar2' ]
def key_function( s ): return map(int, re.findall(REXP, s ))
lines.sort( key=key_function)
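A quick check of the key function (written with a list comprehension so it also works on Python 3, where map returns an iterator that cannot be compared):

```python
import re

REXP = re.compile(r'\d+')
lines = ['foo1bar2', 'foo10bar10', 'foo2bar3', 'foo10bar2']

def key_function(s):
    # the list of integer runs in s; lists compare element by element
    return [int(x) for x in REXP.findall(s)]

lines.sort(key=key_function)
print(lines)
```

Note that this key ignores the non-numeric text entirely, so for input where the text parts differ (say 'a2' vs 'b1') it sorts by the numbers alone.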


Ciao

FB




Re: NameError question - def(self, master) - master not in namespace within class?

2008-10-09 Thread bieffe62
On 9 Ott, 17:43, harijay <[EMAIL PROTECTED]> wrote:
> Hi I am new to writing module and object oriented python code. I am
> trying to understand namespaces and classes in python.
>
> I have the following test case given in three files runner , master
> and child. I am getting an error within child where in one line it
> understands variable master.name and in the next line it gives a
> NameError as given here
>
> "    print "Reset name now from %s to %s , oldname %s is saved in
> mastertrash" % (master.trash, master.name , master.trash)
> NameError: name 'master' is not defined"
>
> Sorry for a long post because I dont know how to  frame my question.
> I am pasting the code contained in the three files and the error
> message here
> Thanks for your help
> harijay
>
> The detailed error I get is
> hazel:tmp hari$ python runner.py
> Traceback (most recent call last):
>   File "runner.py", line 3, in 
>     import child
>   File "/Users/hari/rpc-ccp4/tmp/child.py", line 1, in 
>     class child():
>   File "/Users/hari/rpc-ccp4/tmp/child.py", line 9, in child
>     print "Reset name now from %s to %s , oldname %s is saved in
> mastertrash" % (master.trash, master.name , master.trash)
> NameError: name 'master' is not defined
>
> #File runner.py
> #!/usr/bin/python
> import master
> import child
>
> if __name__=="__main__":
>         print "RUNNING RUNNER"
>         m = master.master("hj","oldhj")
>         s = child.child(m)
>         print "Now I have the variable master name %s and master.trash %s" %
> (m.name , m.trash)
>
> #File master.py
> class master():
>         name=""
>         trash=""
>
>         def __init__(self,name,trash):
>                 self.name = name
>                 self.trash = trash
>
> #File child.py
> class child():
>         def __init__(self,master):
>                 print "Master  name is %s" % master.name
>                 print  "Now seeting master name to setnameinchild in child.py 
> "
>                 tmp = master.trash
>                 master.trash = master.name
>                 master.name = "setnameinchild"
>         print "Reset name now from %s to %s , oldname %s is saved in
> mastertrash" % (master.trash, master.name , master.trash)


You need an 'import master' in child.py too: note that the offending
print is indented at class level, not inside __init__, so the 'master'
it refers to cannot be the method parameter and must come from the
module namespace.

Ciao
-
FB



Re: Python memory usage

2008-10-29 Thread bieffe62
On 21 Ott, 17:19, Rolf Wester <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have the problem that with long running Python scripts (many loops)
> memory consumption increases until the script crashes. I used the
> following small script to understand what might happen:
>
> import gc
>
> print len(gc.get_objects())
>
> a = []
> for i in range( 400 ):
>     a.append( None )
> for i in range( 400 ):
>     a[i] = {}
>
> print len(gc.get_objects())
>
> ret = raw_input("Return:")
>
> del a
> gc.collect()
>
> print len(gc.get_objects())
>
> ret = raw_input("Return:")
>
> The output is:
> 4002706
> Return:
> 2705
> Return:
>
> When I do ps aux | grep python before the first "Return" I get:
> wester    5255 51.2 16.3 1306696 1286828 pts/4 S+   17:59   0:30 python
> memory_prob2.py
>
> and before the second one:
> wester    5255 34.6 15.9 1271784 1255580 pts/4 S+   17:59   0:31 python
> memory_prob2.py
>
> This indicates that although the garbage collector freed 401 objects
> memory consumption does not change accordingly.
>
> I tried the C++ code:
>
> #include 
> using namespace std;
>
> int main()
> {
>         int i;
>         cout << ":";
> //ps 1
>         cin >> i;
>
>         double * v = new double[4000];
>         cout << ":";
> //ps 2
>         cin >> i;
>
>         for(int i=0; i < 4000; i++)
>                 v[i] = i;
>
>         cout << v[4000-1] << ":";
> //ps 3
>         cin >> i;
>
>         delete [] v;
>
>         cout << ":";
> //ps 4
>         cin >> i;
>
> }
>
> and got from ps:
>
> ps 1: 11184
> ps 1: 323688
> ps 1: 323688
> ps 1: 11184
>
> which means that the memery which is deallocated is no longer used by
> the C++ program.
>
> Do I miss something or is this a problem with Python? Is there any means
> to force Python to release the memory that is not used any more?
>
> I would be very appreciative for any help.
>
> With kind regards
>
> Rolf



To be sure that the deallocated memory is not cached at some level to
be reused, you could try something like this:

while 1:
    l = [dict() for i in range(400)]
    l = None # no need of gc and del

For what it's worth, on my PC (Windows XP and Python 2.5.2) the memory
usage of the process, monitored with the Task Manager, grows up to
600 MB before the memory is actually released.

Note that in your example, as in mine, you do not need to call
gc.collect(), because the huge list object is already deleted when you
do "del a" (or in my case when I reassign "l" and the huge list drops
to 0 reference count). The basic memory management in CPython is based
on reference counting; gc is only used to find and break circular
reference chains, which your example does not create. As a proof of
that, if you print the return value of gc.collect() (which is the
number of collected objects) you should get 0.
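The claim can be checked directly: without reference cycles the cycle collector finds nothing to do, while a self-referencing object really does need it. A small sketch:

```python
import gc

gc.collect()                      # clear any leftover startup garbage first

a = [dict() for i in range(1000)]
a = None                          # refcounts drop to zero: freed immediately
freed_without_cycles = gc.collect()
print(freed_without_cycles)       # nothing left for the cycle collector

b = []
b.append(b)                       # a list referencing itself: a cycle
del b                             # refcount never reaches zero on its own
freed_cycle = gc.collect()
print(freed_cycle)                # only now does the collector have work
```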

Ciao
--
FB


Re: how to use logging module to log an object like print()

2008-10-29 Thread bieffe62
On 29 Ott, 12:24, Steve Holden <[EMAIL PROTECTED]> wrote:
> Diez B. Roggisch wrote:
> > davy zhang schrieb:
> >> mport logging
> >> import pickle
>
> >> # create logger
> >> logger = logging.getLogger("simple_example")
> >> logger.setLevel(logging.DEBUG)
> >> # create console handler and set level to debug
> >> ch = logging.StreamHandler()
> >> ch.setLevel(logging.DEBUG)
> >> # create formatter
> >> formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s
> >> - %(message)s ")
> >> # add formatter to ch
> >> ch.setFormatter(formatter)
> >> # add ch to logger
> >> logger.addHandler(ch)
>
> >> d = {'key':'msg','key2':'msg2'}
>
> >> # "application" code
> >> logger.debug("debug message",d)#can not do this
>
> > logger.debug("yes you can: %r", d)
>
> One deficiency of this approach, however, is that the string formatting
> is performed even when no logging is required, thereby wasting a certain
> amount of effort on unnecessary formatting.
>
> regards
>  Steve
> --
> Steve Holden        +1 571 484 6266   +1 800 494 3119
> Holden Web LLC              http://www.holdenweb.com/- Nascondi testo citato
>

Sure about that?

This is the implementation of Logger.debug in
the file ..Python25\lib\logging\__init__.py:

    def debug(self, msg, *args, **kwargs):
        """
        Log 'msg % args' with severity 'DEBUG'.

        To pass exception information, use the keyword argument
        exc_info with a true value, e.g.

        logger.debug("Houston, we have a %s", "thorny problem",
                     exc_info=1)
        """
        if self.manager.disable >= DEBUG:
            return
        if DEBUG >= self.getEffectiveLevel():
            apply(self._log, (DEBUG, msg, args), kwargs)

The other methods (info, warning, ...) are similar. It looks like the
formatting is only done if the message is actually emitted.

Ciao
-
FB


Re: Problem with writing fast UDP server

2008-11-20 Thread bieffe62
On 20 Nov, 16:03, Krzysztof Retel <[EMAIL PROTECTED]>
wrote:
> Hi guys,
>
> I am struggling writing fast UDP server. It has to handle around 1
> UDP packets per second. I started building that with non blocking
> socket and threads. Unfortunately my approach does not work at all.
> I wrote a simple case test: client and server. The client sends 2200
> packets within 0.137447118759 secs. The tcpdump received 2189 packets,
> which is not bad at all.
> But the server only handles 700 -- 870 packets, when it is non-
> blocking, and only 670 – 700 received with blocking sockets.
> The client and the server are working within the same local network
> and tcpdump shows pretty correct amount of packets received.
>
> I included a bit of the code of the UDP server.
>
> class PacketReceive(threading.Thread):
>     def __init__(self, tname, socket, queue):
>         self._tname = tname
>         self._socket = socket
>         self._queue = queue
>         threading.Thread.__init__(self, name=self._tname)
>
>     def run(self):
>         print 'Started thread: ', self.getName()
>         cnt = 1
>         cnt_msgs = 0
>         while True:
>             try:
>                 data = self._socket.recv(512)
>                 msg = data
>                 cnt_msgs += 1
>                 total += 1
>                 # self._queue.put(msg)
>                 print  'thread: %s, cnt_msgs: %d' % (self.getName(),
> cnt_msgs)
>             except:
>                 pass
>
> I was also using Queue, but this didn't help neither.
> Any idea what I am doing wrong?
>
> I was reading that Python socket modules was causing some delays with
> TCP server. They recomended to set up  socket option for nondelays:
> "sock.setsockopt(SOL_TCP, TCP_NODELAY, 1) ". I couldn't find any
> similar option for UDP type sockets.
> Is there anything I have to change in socket options to make it
> working faster?
> Why the server can't process all incomming packets? Is there a bug in
> the socket layer? btw. I am using Python 2.5 on Ubuntu 8.10.
>
> Cheers
> K

Stupid question: did you try removing the print (or, say, printing
only once every 100 messages)?
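A per-packet print can easily dominate the receive loop. A self-contained sketch of a throttled version, with loopback sockets standing in for the real traffic (the packet count and report interval are made up for illustration):

```python
import socket

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(('127.0.0.1', 0))          # let the OS pick a free port
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(100):
    send_sock.sendto(b'x', addr)          # simulate the burst of packets

count = 0
recv_sock.settimeout(0.5)
try:
    while True:
        recv_sock.recv(512)
        count += 1
        if count % 50 == 0:               # report periodically, not per packet
            print('received %d packets' % count)
except socket.timeout:
    pass
finally:
    recv_sock.close()
    send_sock.close()
```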

Ciao

FB


Re: A tale of two execs

2009-02-23 Thread bieffe62
On Feb 23, 5:53 pm, aha  wrote:
> Hello All,
>   I am working on a project where I need to support versions of Python
> as old as 2.3. Previously, we distributed Python with our product, but
> this seemed a bit silly so we are no longer doing this.  The problem
> that I am faced with is that we have Python scripts that use the
> subprocess module, and subprocess is not available in Python 2.3.
> Below is the strategy I am thinking about using, however if, you have
> better ideas please let me know.
>
> def runner(cmd, stdin, stdout, ...):
>   try:
>     import subprocess
>     sbm = 1
>   except:
>     sbm = 0
>
>   # Now do something
>   if sbm:
>     process = subporcess(...)
>   else:
>     import popen2
>     process = popen2.Popen4(...)
>
> Has anyone else run into a situation similar to this one?

IIRC, subprocess is a pure Python module. If so, you could check
whether subprocess compiles under 2.3.
If it does, or if it has easily fixable problems, you could rename it
as something like 'backported_subprocess.py' and include it in your
app, which then should do:

try:
   import subprocess
except ImportError:
   import backported_subprocess as subprocess

This should minimize the changes to your code. Of course, if the usage
of subprocess is restricted to a few functions, your approach of
providing two alternative implementations of those functions is also
workable. However, check if/when the popen2 module, which is
deprecated, is planned for removal, otherwise you will have the same
problem later.

HTH

Ciao
--
FB


Re: A tale of two execs

2009-02-23 Thread bieffe62
On Feb 23, 6:06 pm, [email protected] wrote:
> On Feb 23, 5:53 pm, aha  wrote:
>
>
>
>
>
> > Hello All,
> >   I am working on a project where I need to support versions of Python
> > as old as 2.3. Previously, we distributed Python with our product, but
> > this seemed a bit silly so we are no longer doing this.  The problem
> > that I am faced with is that we have Python scripts that use the
> > subprocess module, and subprocess is not available in Python 2.3.
> > Below is the strategy I am thinking about using, however if, you have
> > better ideas please let me know.
>
> > def runner(cmd, stdin, stdout, ...):
> >   try:
> >     import subprocess
> >     sbm = 1
> >   except:
> >     sbm = 0
>
> >   # Now do something
> >   if sbm:
> >     process = subporcess(...)
> >   else:
> >     import popen2
> >     process = popen2.Popen4(...)
>
> > Has anyone else run into a situation similar to this one?
>
> IIRC,  subprocess is a pure python module. If so, tou could try if
> subprocess compiles under 2.3.

...

I checked, and, for the Windows platform subprocess.py uses the
modules msvcrt and _subprocess, which I am unable to locate on my
Windows XP Python 2.6 installation. This makes the whole thing harder,
even impossible if _subprocess was created specifically for
subprocess.py.

For non-Windows platforms, subprocess.py seems to use only fairly
well-established modules, so there is a chance to backport it.

Ciao again
--
FB


Re: don't understand behaviour of recursive structure

2009-03-14 Thread bieffe62
On 14 Mar, 17:31, Dan Davison  wrote:
> I'm new to python. Could someone please explain the following behaviour
> of a recursive data structure?
>
> def new_node(id='', daughters=[]):
>     return dict(id=id, daughters=daughters)
>

Most probably, here is the problem: try this instead:

def new_node(id='', daughters=None):
    if not daughters: daughters = []
    return dict(id=id, daughters=daughters)

This is one of the less intuitive points in Python: default values are
evaluated only once, when the 'def' statement is executed. So when you
call 'new_node' twice without specifying the daughters parameter, both
dicts get the _same_ list. Hence chaos.

In other words, it is exactly as if you wrote:

EmptyList = []
def new_node(id='', daughters=EmptyList):
    return dict(id=id, daughters=daughters)

See the problem now? If not, try this:

l1 = []
l2 = l1
l1.append(1)
print l2

See now? The same happens inside your 'nodes'.
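The effect, and the fix, condensed into a runnable check:

```python
def new_node(id='', daughters=[]):            # buggy: one shared default list
    return dict(id=id, daughters=daughters)

a = new_node('a')
b = new_node('b')
a['daughters'].append('x')
assert b['daughters'] == ['x']                # b sees a's change!

def new_node_fixed(id='', daughters=None):    # the suggested fix
    if daughters is None:
        daughters = []                        # a fresh list on every call
    return dict(id=id, daughters=daughters)

c = new_node_fixed('c')
d = new_node_fixed('d')
c['daughters'].append('x')
assert d['daughters'] == []                   # independent lists now
```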


Ciao

FB


Re: Strange crash issue on Windows w/ PyGTK, Cairo...

2009-03-18 Thread bieffe62
On Mar 18, 6:20 am, CJ Kucera  wrote:
> Hello list!
>
> I'm having a strange issue, and I'm not entirely certain yet where
> the actual problem is (ie, Python, PyGTK, or gtk+), but I figure I'll
> start here.  Bear with me, this'll probably be a long explanation...
>
> I've been building an app which is meant to be run on both Linux and
> Windows.  It uses PyGTK for its GUI, and the main area of the app is
> a gtk.DrawingArea which I draw on using PyCairo.  I've been developing
> on Linux, and it works great on that platform, with no issues that
> I'm aware of.  When running on Windows, though, the app exhibits the
> following behavior:
>
>   1) When the .py of the main file which runs the application GUI first
>   gets compiled to a .pyc (ie: the first time it's run, or the first
>   time after .py modification), the application runs totally fine, with
>   no apparent problems.
>
>   2) Any attempt AFTER that, the application will start up, *start* to
>   do its data-loading, but then almost immediately crash with an
>   enigmatic "python.exe has generated errors and will be closed by
>   Windows."  When it does so, there is no output whatsoever to the
>   console that the application was launched from, and the crash doesn't
>   always happen in exactly the same place.
>
> The pattern remains the same, though - if the .pyc needs to be compiled,
> the application works fine, but if not: boom.
>
> I've been steadily stripping the program down to what I hoped would be a
> small, reproducible app that I could post here, and I do intend to do so
> still, but it's rather slow going.  For now, I was hoping to see if
> anyone's ever heard of behavior like this before, and might know what
> to do about it, or at least a possible avenue of attack.
>
> As I've been reducing the program down, I've encountered even stranger
> (IMO) behavior...  In one instance, changing a function name seemed to
> make the program work.  I took out the handler which draws my app's
> "About" box, and suddenly my problem went away.  Occasionally I would
> remove a function and the app would suddenly *always* fail with that
> Windows crash error, and I'd have to put the function back in.  Keep
> in mind, these are functions which *aren't being called anywhere.*
>
> Sometimes I could replace a function's entire contents with just "pass"
> and the app would suddenly behave properly, or not behave at all.
>
> It's almost as if whatever's doing the byte-compilation is getting
> screwed up somehow, and really small changes to parts of the file which
> aren't even being touched are having a huge impact on the application as
> a whole.  It's seriously vexing, and certainly the oddest problems I've
> seen in Python.
>
> Windows versions I can reproduce this on: XP and win2k
> Python versions I've reproduced this on:
>   Python 2.5.4 with:
>     PyGTK 2.12.1-2-win32-py2.5
>     PyGObject 2.14.1-1.win32-py2.5
>     PyCairo 1.4.12-1.win32-py2.5
>   Python 2.6.1 with:
>     PyGTK 2.12.1-3-win32-py2.6
>     PyGObject 2.14.2-2.win32-py2.6
>     PyCairo 1.4.12-2.win32-py2.6
> gtk+ 2.12.9-win32-2 (fromhttp://sf.net/projects/gladewin32, which is
> the version linked to from pygtk.org)
>
> The 2.6 Python stuff I've actually only tried on win2k so far, not XP,
> though given my history with this, I suspect that that wouldn't make a
> difference.
>
> Since gtk+ is the one bit of software that hasn't been swapped out for
> another version, I suppose that perhaps that's where the issue is, but
> it seems like Python should be able to at least throw an Exception or
> something instead of just having a Windows crash.  And having it work
> the FIRST time, when the .pyc's getting compiled, is rather suspicious.
>
> Anyway, I'll continue trying to pare this app down to one manageable
> script which I can post here, but until then I'd be happy to hear ideas
> from anyone else about this.
>
> Thanks!
>
> -CJ

It looks like one of the C extensions you are using is causing a
segfault or similar in the Python interpreter (or it could be a bug in
the interpreter itself, but that is a lot less likely).
I would suggest filling the startup portion of your code with trace
statements to try to understand which module function is the
troublesome one, then go looking in the bug tracking system of that
module, try the newest version, and ask on the dedicated mailing list
if there is one.

Making a small script that can reproduce the bug is also a very good
idea, and will help speed up the solution of the problem.

Ciao
---
FB


Re: How to do this in Python?

2009-03-18 Thread bieffe62
On Mar 18, 2:00 am, Jim Garrison  wrote:

>  I don't want "for line in f:" because binary
> files don't necessarily have lines and I'm bulk processing
> files potentially 100MB and larger.  Reading them one line
> at a time would be highly inefficient.
>
> Thanks- Hide quoted text -
>
> - Show quoted text -

As far as I know, there are at least two levels of cache between your
application and the actual file: the Python interpreter buffers its
reads, and the operating system does too. So if you are worried about
reading the file efficiently, I think you can stop worrying. And if
you are processing files which might have no line terminations at all,
then reading in blocks is the right thing to do.

Ciao

FB


Re: Strange crash issue on Windows w/ PyGTK, Cairo...

2009-03-18 Thread bieffe62
On Mar 18, 2:33 pm, CJ Kucera  wrote:
> [email protected] wrote:
> > It looks like some of the C extension you are using is causing a
> > segfault or similar in python
> > interpreter (or it could be a bug in the interpreter itself, but it is
> > a lot less likely).
>
> Okay...  I assume by "C extension" you'd include the PyGTK stuff, right?
> (ie: pycairo, pygobject, and pygtk)  Those are the only extras I've got
> installed, otherwise it's just a base Python install.
>
> Would a bad extension really cause this kind of behavior though?
> Specifically the working-the-first-time and crash-subsqeuent-times?  Do
> C extensions contribute to the bytecode generated while compiling?
>

If you have worked with C/C++, you know that memory-related bugs can
be very tricky.
More than once, working with C code, I have had crashes that
disappeared if I just added a 'printf', because the memory allocation
pattern changed and the corrupted memory was no longer relevant.

Ciao

FB


Re: How to do this in Python? - A "gotcha"

2009-03-19 Thread bieffe62
On Mar 18, 6:06 pm, Jim Garrison  wrote:
> S Arrowsmith wrote:
> > Jim Garrison   wrote:
> >> It's a shame the iter(o,sentinel) builtin does the
> >> comparison itself, instead of being defined as iter(callable,callable)
> >> where the second argument implements the termination test and returns a
> >> boolean.  This would seem to add much more generality... is
> >> it worthy of a PEP?
>
> > class sentinel:
> >     def __eq__(self, other):
> >         return termination_test()
>
> > for x in iter(callable, sentinel()):
> >     ...
>
> > Writing a sensible sentinel.__init__ is left as an exercise
>
> If I understand correctly, this pattern allows me to create
> an object (instance of class sentinel) that implements whatever
> equality semantics I need to effect loop termination.  In the
> case in point, then, I end up with
>
>      class sentinel:
>          def __eq__(self,other):
>              return other=='' or other==b''
>
>      with open(filename, "rb") as f:
>          for buf in iter(lambda: f.read(1000), sentinel())):
>              do_something(buf)
>
> i.e. sentinel is really "object that compares equal to both ''
> and b''".  While I appreciate how this works, I think the
> introduction of a whole new class is a bit of overkill for
> what should be expressible in iter()- Hide quoted text -
>
> - Show quoted text -


In this specific case there should be no need to create a class,
because at least with Python 2.6:

>>> b'' == ''
True
>>> u'' == ''
True
>>>

so you should be able to do:

  with open(filename, "rb") as f:
      for buf in iter(lambda: f.read(1000), ""):
          do_something(buf)
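A runnable sketch of the two-argument iter() idiom, with io.BytesIO standing in for the opened binary file. (Caveat: on Python 3, bytes and str no longer compare equal, so there the sentinel for a binary file must be b''.)

```python
import io

f = io.BytesIO(b'a' * 2500)                  # fake 2500-byte binary file
chunks = list(iter(lambda: f.read(1000), b''))  # stop at the empty read
assert [len(c) for c in chunks] == [1000, 1000, 500]
```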


Ciao
--
FB




Re: multiprocessing and Tk GUI program (won't work under Linux)

2009-03-20 Thread bieffe62
On Mar 20, 4:36 am, akineko  wrote:
> Hello everyone,
>
> I have started using multiprocessing module, which is now available
> with Python 2.6.
> It definitely opens up new possibilities.
>
> Now, I developed a small GUI package, which is to be used from other
> programs.
> It uses multiprocessing and Pipes are used to pump image data/command
> to the GUI process.
> (I used multiprocessing because I got stack size problem if I used
> threading)
>
> It works great under Solaris environment, which is my primary
> development environment.
>
> When I tried the program under Linux (CentOS5), the program didn't
> work (it hung).
> My other programs that use multiprocessing work flawlessly under both
> Solaris and Linux.
>
> To investigate this problem, I create a much simpler test program. The
> test program uses only basic necessary codes, nothing else. But my
> simple test program still exhibits the same problem.
>
> My test program display a GUI Button using three possible approaches:
>
> (1) multiprocessing   (Solaris - okay, Linux - hung)
> (2) threading            (Solaris - okay, Linux - okay)
> (3) none (main thread) (Solaris - okay, Linux - okay)
>
> Is this a bug in a multiprocessing package? Or, I overlooked
> something?
>
> Any comments on resolving this problem will be greatly appreciated.
>
> The attached is my test program (sorry for posting a long program).
>
> Thank you!
> Aki Niimura
>
> #!/usr/bin/env python
>
> import sys, os
> import time
> import threading
> import multiprocessing
>
> from Tkinter import *
>
> ###
> ###     class Panel
> ###
>
> class Panel:
>
>     def __init__(self, subp='multip'):
>         if subp == 'multip':
>             print 'multiprocessing module to handle'
>             # GUI process
>             self.process1 = multiprocessing.Process(target=self.draw)
>             self.process1.start()
>         elif subp == 'thread':
>             print 'threading module to handle'
>             # GUI thread
>             self.thread1 = threading.Thread(target=self.draw)
>             self.thread1.start()
> #           self.thread1.setDaemon(1)
>         else:
>             print 'main thread to handle'
>             pass
>
>     def draw(self):
>         self.root = Tk()
>         w = Button(self.root, text='Exit', command=self.root.quit)
>         w.pack()
>         self.root.mainloop()
>
> ###
> ###     Main routine
> ###
>
> def main():
>     subp = 'multip'
>     if len(sys.argv) >= 2:
>         if not sys.argv[1] in ['multip', 'thread', 'none',]:
>             print 'Invalid option: %s' % sys.argv[1]
>             print "Valid options are 'multip', 'thread', 'none'"
>             sys.exit(1)
>         else:
>             subp = sys.argv[1]
>     panel = Panel(subp)
>     if subp == 'none':
>         panel.draw()
>     while 1:
>         time.sleep(1)
>     pass
>
> if __name__ == '__main__':
>     main()

It is just a guess, but did you try making 'draw' a module-level
function rather than a method?
I read that the arguments passed to the subprocess target must be
picklable; in your test program 'draw' takes 'self' as a parameter,
which is your Panel holding Tkinter state, and I read somewhere that
Tkinter objects are not picklable ...
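The difference can be illustrated with pickle directly. This is a sketch, not the actual Tkinter failure: a thread lock stands in for the unpicklable GUI state.

```python
import pickle
import threading

class Panel(object):
    def __init__(self):
        # a lock stands in for unpicklable GUI state such as Tk widgets
        self.lock = threading.Lock()
    def draw(self):
        pass

def draw():                      # module-level: pickled by reference only
    pass

assert pickle.loads(pickle.dumps(draw)) is draw

try:
    pickle.dumps(Panel().draw)   # the bound method drags the instance along
    bound_ok = True
except TypeError:
    bound_ok = False
assert bound_ok is False
```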

Ciao

FB


Re: Need guidelines to show results of a process

2009-03-20 Thread bieffe62
On Mar 20, 1:00 am, Vizcayno  wrote:
> Hi:
> I wrote a Python program which, during execution, shows me messages on
> console indicating at every moment the time and steps being performed
> so I can have a 'log online' and guess remaining time for termination,
> I used many 'print' instructions to show those messages, i.e.  print
> "I am in step 4 at "+giveTime()  print "I am in step 5 at
> "+giveTime(), etc.
> Now I need to execute the same program but from a GUI application. I
> must show the same messages but on a "text field".
> As you can guess, it implies the changing of my program or make a copy
> and replace the print instructions by   textField += "I am in step 4
> at "+giveTime() then textField += "I am in step 5 at "+giveTime(),
> etc.
> I wanted to do the next:
> if output == "GUI":
>     textField += "I am in step 4 at "+giveTime()
>     force_output()
> else:
>     print "I am in step 4 at "+giveTime()
> But it is not smart, elegant, clean ... isn't it?
> Any ideas please?
> Regards.

If you have many scattered prints and you don't want to change them all by
hand, you can try redirecting sys.stdout (and maybe sys.stderr) to
something that collects the output and makes it available for display
however you like. The only requirement is that your sys.stdout replacement
has a 'write' method which accepts a string parameter.
I did it once when I needed to quickly add a Tkinter GUI to a
console-based program, and it worked well.
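A minimal sketch of that trick (the class name is made up): anything with a
write() method can stand in for sys.stdout, so the existing print
statements keep working and their output lands in a buffer the GUI can
show in a text field.

```python
import sys

class OutputCollector(object):
    """File-like object that collects everything written to it."""
    def __init__(self):
        self.chunks = []

    def write(self, text):
        # The only method print really needs.
        self.chunks.append(text)

    def flush(self):
        # Some code calls flush(); make it a harmless no-op.
        pass

    def getvalue(self):
        return ''.join(self.chunks)

collector = OutputCollector()
old_stdout, sys.stdout = sys.stdout, collector
try:
    print("I am in step 4")   # unchanged program code keeps printing
    print("I am in step 5")
finally:
    sys.stdout = old_stdout   # always restore the real stdout

# The GUI can now read collector.getvalue() into its text field.
```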

IIRC the wxPython toolkit does that by default, unless you tell it
otherwise, and displays any output/error in a default dialog window.

Ciao

FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: Re. suid/sudo in python

2009-03-30 Thread bieffe62
On Mar 30, 1:16 pm, Rustom Mody  wrote:
> Ben Finney wrote
>
> > The key thing to realise is that, having relinquished privilege, the same 
> > process can't get it back again as easily. So if you need to
> > do some tasks as a privileged user, do those *very* early and then drop the 
> > privileges for the rest of the life of the process.
>
> > Taking this further, you should isolate exactly what tasks need root 
> > privilege into a separate process altogether, and make
> > that process as well-tested and simple as possible: it should do nothing 
> > *but* those tasks for which it needs root privilege.
>
> I dont think this would be easy or convenient (if at all possible) in my case.
>
> I am trying to write a tiny web based application that will give an
> overall picture of LVM, Volume groups, Raid, SCSI and the underlying
> disk partitions. The administrative tools dealing with low level
> storage stack (e.g. fdisk, pvcreate, vgcreate, lvcreate, mdadm etc.)
> need to be run as root.
>
> However since this runs behind apache. Apache creates a separate user
> for the webserver. Hence the CGI scripts or any other tools that they
> call run as that user.
>
> The solution currently is
> - Write the CGI program in C, put setuid(0), setgid(0) statements in
> that file and then perform any other actions (including calling other
> scripts)
> - Set the S bit of the executable of the CGI binary compiled from the
> C file (chmod +S xxx.cgi)
>
> Yeah yeah "Security! HOLE!!" etc but please note that this is running
> on linux on vmware on an otherwise secure system.
>
> So whats the best way of doing this in python?

Have a 'server process' running with root privileges (a script started by
a privileged account) and implement a protocol for your CGI scripts under
Apache to ask it for system info. In Python this is a lot easier than it
sounds.
The simplest case would be to send a 'system command' to the server
through a Unix socket; the server executes the command as received and
returns the command output. Not more than a day's work, I believe. Not
much more secure than a setuid Python script either, maybe less :-)
A better implementation would be one where the protocol only allows a set
of predefined safe requests ...

Ciao
--
FB

--
http://mail.python.org/mailman/listinfo/python-list


Re: Which is more Pythonic? (was: Detecting Binary content in files)

2009-04-01 Thread bieffe62
On Apr 1, 5:10 pm, John Posner  wrote:
> Dennis Lee Bieber presented a code snippet with two consecutive statements
> that made me think, "I'd code this differently". So just for fun ... is
> Dennis's original statement or my "_alt" statement more idiomatically
> Pythonic? Are there even more Pythonic alternative codings?
>
>    mrkrs = [b for b in block
>      if b > 127
>        or b in [ "\r", "\n", "\t" ]       ]
>
>    mrkrs_alt1 = filter(lambda b: b > 127 or b in [ "\r", "\n", "\t" ],
> block)
>    mrkrs_alt2 = filter(lambda b: b > 127 or b in list("\r\n\t"), block)
>

Never tested my 'pythonicity', but I would do:

def test(b):
    return b > 127 or b in "\r\n\t"
mrkrs = filter(test, block)

Note: before starting to study Haskell, I would probably have used the
list comprehension. Still can't stand anonymous functions, though.



> (Note: Dennis's statement converts a string into a list; mine does not.)
>
> ---
>
>    binary = (float(len(mrkrs)) / len(block)) > 0.30
>
>    binary_alt = 1.0 * len(mrkrs) / len(block) > 0.30
>

I believe now one should do (at least in new code):

from __future__ import division  # not needed in Python 3.0
binary = (len(mrkrs) / len(block)) > 0.30

In the past I often used the * 1.0 trick, but I now believe an explicit
conversion is better.
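Put together, the true-division version of the test reads like this
(made-up sample data, shown with Python 3 bytes, where iterating gives
integer byte values):

```python
from __future__ import division  # a no-op on Python 3

# Made-up sample: 100 bytes, of which 40 are the "text marker" bytes.
block = b'\x00' * 60 + b'\r\n\t\t' * 10

# Bytes above 127 or among CR/LF/TAB count as markers.
mrkrs = [b for b in block if b > 127 or b in b'\r\n\t']

binary = (len(mrkrs) / len(block)) > 0.30            # true division
binary_alt = float(len(mrkrs)) / len(block) > 0.30   # old * 1.0 / cast style
```

Both expressions give the same answer; the first simply no longer needs
the cast once true division is in effect.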

> -John
>


Ciao
-
FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: Killing threads

2009-04-06 Thread bieffe62
On Apr 5, 9:48 pm, Dennis Lee Bieber  wrote:
> On Sun, 05 Apr 2009 12:54:45 +0200, Francesco Bochicchio
>  declaimed the following in
> gmane.comp.python.general:
>
> > If yor threads are not set as 'deamons' using Thread.setDaemon method,
> > then your main program at its termination should call Thread.join for
> > each of the thread spawned, otherwise the whole process will not quit.
>
>         .join() alone won't do anything but wait for the thread itself to
> quit -- which means one still has to signal the threads to commit
> suicide.
>

Yes. Mine was an additional suggestion to the ones the OP had already
received. I guess that was not clear enough ...

.. follows a nice explanation on methods to stop threads that I was
too lazy to write ...

>
>         If the thread has the capability to become blocked on some operation
> (say a socket read without timeout), none of these solutions will work.
> That just leaves setting the threads daemonic at the start -- which
> indicates the runtime may brutally kill them when the main program
> exits.
>

You know, this bugs me a little. I know that killing threads is hard in
any language (I'm facing the issue right now in a C++ program I'm writing
at work), especially in a platform-independent way, but Java managed to do
it. Now, Python is in many ways a higher-level language than Java, but
when it comes to threading I feel it lacks something.
I know that often it is not too hard to avoid blocking reads, and you can
always use subprocesses, which with the new multiprocessing module are
almost as easy as threads, but still ...

Ciao
-
FB

--
http://mail.python.org/mailman/listinfo/python-list


Re: Killing threads

2009-04-06 Thread bieffe62
On 6 Apr, 05:25, [email protected] wrote:
> On Apr 5, 11:07 pm, Dennis Lee Bieber  wrote:
>
>
>
>
>
> > On Sun, 5 Apr 2009 17:27:15 -0700 (PDT), imageguy
> >  declaimed the following in
> > gmane.comp.python.general:
>
> > > In threading.Event python 2.5 docs say;
> > > "This is one of the simplest mechanisms for communication between
> > > threads: one thread signals an event and other threads wait for it. "
>
> > > Again, I have limited experience, however, in my reading of the
> > > threading manual and review examples, Events were specifically design
> > > to be a thread safe way to communicate a 'state' to running threads ?
> > > In the OP's example 'do stuff' was open to wide interpretation,
> > > however, if within the thread's main 'while' loop the tread checks to
> > > see if the 'keepgoing' Event.isSet(), in what scenario would this
> > > create deadlock ?
>
> >         If you are going to perform a CPU intensive polling loop, there is
> > no sense in using the Event system in the first place... Just create a
> > globally accessible flag and set it to true when you want to signal the
> > threads (or false if you don't want to use the negation "while not
> > flagged: do next processing step")
>
> >         Event is optimized for the case wherein threads can WAIT (block) on
> > the Event object.
> > --
> >         Wulfraed        Dennis Lee Bieber               KD6MOG
> >         [email protected]             [email protected]
> >                 HTTP://wlfraed.home.netcom.com/
> >         (Bestiaria Support Staff:               [email protected])
> >                 HTTP://www.bestiaria.com/
>
> Well it turns out my problem was with queues not with threads.  I had
> a self.die prop in my thread object that defaults to FALSE and that I
> set to true when i wanted the thread to die.  then my loop would be
> while not die:  It seemed pretty simple so I didn't know why it was
> failing.  What I didn't know, because I'm quite new to python, is that
> queue.get was blocking.  So my producer thread why dying immediately
> but my worker threads were all blocking on their queue.gets.  So they
> were never falling off the loop.  I changed it to queue.get_nowait()
> and added a queue.empty exception and everything worked as expected.
>
> So I thought I knew what was going on and that I was having a really
> esoteric problem when i was actually having a pretty boring problem I
> didn't recognize.
>
> Thanks everybody for the help!>

I've gone through that too, when I started with Python threads :-)
Be aware that using get_nowait may lead to your thread burning CPU by
checking an often-empty queue. I tend to use Queue.get with a timeout:
small enough to keep the thread responsive, but large enough not to waste
CPU on too-frequent checks.
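A sketch of that pattern (hypothetical worker; module is named Queue on
Python 2): the worker blocks on the queue for at most the timeout, then
re-checks a stop flag, so it is responsive to shutdown without
busy-polling.

```python
import queue        # named 'Queue' on Python 2
import threading
import time

stop_flag = threading.Event()
work_queue = queue.Queue()
results = []

def worker():
    # Block on the queue, but wake every 0.1 s to re-check the stop flag.
    while not stop_flag.is_set():
        try:
            item = work_queue.get(timeout=0.1)
        except queue.Empty:
            continue                 # queue was empty: just re-check the flag
        results.append(item * 2)     # stand-in for the real work

t = threading.Thread(target=worker)
t.start()
for n in (1, 2, 3):
    work_queue.put(n)
time.sleep(0.3)      # let the worker drain the queue
stop_flag.set()      # signal shutdown; thread exits within one timeout
t.join()
```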

Ciao
-
FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: segmentation fault while using ctypes

2009-04-14 Thread bieffe62
On Apr 15, 12:39 am, sanket  wrote:
> Hello All,
>
> I am dealing with this weird bug.
> I have a function in C and I have written python bindings for it using
> ctypes.
>
> I can call this function for couple of times and then suddenly it
> gives me seg fault.
> But I can call same function from a C code for any number of times.
>
> I cannot get what's going on.
>
> here is my code.
>
> /**/
> /* C Function I am calling */
> int get_hash(char *filename,int rate,int ch,unsigned char* hash,
> unsigned int* hash_size,short* avg_f,short* avg_d){
>
> /* some variable declarations here */
> fp = fopen(filename,"rb");
>
> data = (signed short *)malloc(sizeof(signed short) * N_BLOCKS);
>
> whereami = WAVE_HEADER_SIZE;
> while((!feof(fp)) && (fp_more == 1) && !ferror(fp)){
>      fp_data_size = fread(data,sizeof(signed short),N_BLOCKS,fp);
>      whereami += fp_data_size;
>      fp_more = fp_feed_short(fooid,data,fp_data_size); // call to some
> library funtion
>  } //end while
>
> /* some arithmetic calculations here */
>
>   n = my_fp_calculate(fooid,audio_length,fp_fingerprint,&fit,&dom);
>
>   if (data != NULL)
>       free(data)
>   fclose(fp)
>   return n;
>
> }
>
> /* END OF C FUNCTION
> */
> 
> Python code
> -
> from ctypes import *
> lib = cdll.LoadLibrary("/usr/lib/libclient.so")
>
> def my_func(filename,rate,ch):
>     hash = (c_ubyte * 424)()
>     hash_size = c_uint()
>     avg_f = c_short(0)
>     avg_d = c_short(0)
>     n = lib.get_hash(filename,rate,ch,hash,byref(hash_size),byref
> (avg_f),byref(avg_d))
>     hash = None
>
> def main():
>     for filename in os.listdir(MY_DIR):
>             print filename
>             my_func(filename,100,10)
>             print
> ""
>
> if __name__ == "__main__":
>     main()
>
> == END OF PYTHON CODE ==
>
> Thank you in advance,
> sanket


You don't show how the C function makes use of the parameters hash,
hash_size, avg_f and avg_d. Since you pass them by address, I guess they
are output parameters, but how do they get written? In particular, make
sure you don't write into hash more than the 424 bytes you allocated for
it in the calling Python code ...

Ciao

FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python mail truncate problem

2009-05-18 Thread bieffe62
On 18 Mag, 05:51, David  wrote:
> Hi,
>
> I am writing Python script to process e-mails in a user's mail
> account. What I want to do is to update that e-mail's Status to 'R'
> after processing it, however, the following script truncates old e-
> mails even though it updates that e-mail's Status correctly. Anybody
> knows how to fix this?
>
> Thanks so much.
>
>   fp = '/var/spool/mail/' + user
>                 mbox = mailbox.mbox(fp)
>
>                 for key, msg in mbox.iteritems():
>                         flags = msg.get_flags()
>
>                         if 'R' not in flags:
>                                 # now process the e-mail
>                                 # now update status
>                                 msg.add_flag('R' + flags)
>                                 mbox[key] = msg


I have no idea about your problem. However, I believe the last statement,
"mbox[key] = msg", is not needed. The objects returned by iteritems are
not copies but the actual ones in the dictionary, so your msg.add_flag()
already modifies the object in mbox.

HTH

Ciao
-
FB
-- 
http://mail.python.org/mailman/listinfo/python-list