On 10/10/06, Fredrik Lundh <[EMAIL PROTECTED]> wrote:
> Josiah Carlson wrote:
>
> > Presumably with this library you have created, you have also written a
> > fast object encoder/decoder (like marshal or pickle).  If it isn't any
> > faster than cPickle or marshal, then users may bypass the module and opt
> > for fork/etc. + XML-RPC
>
> XML-RPC isn't close to marshal and cPickle in performance, though, so
> that statement is a bit misleading.
>
> the really interesting thing here is a ready-made threading-style API, I
> think.  reimplementing queues, locks, and semaphores can be a reasonable
> amount of work; might as well use an existing implementation.
>

The module uses cPickle.   As for speed, on my old laptop I get roughly
1300 objects per second through a queue.  For some purposes this may
be too slow, in which case you are better off sticking with threading;
for many other cases it should not be a problem.  It should also be
quite possible to connect to an ObjectServer on a different machine,
though I have not tried it.
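
To give an idea of what such a measurement could look like, here is a
minimal timing sketch.  The import name `processing` and the
threading-style Process/Queue constructors are assumptions about the
module's API, not a definitive usage:

    import time
    from processing import Process, Queue   # assumed import name and API

    def producer(q, n):
        # push n small objects through the queue
        for i in range(n):
            q.put((i, 'payload'))
        q.put(None)                          # sentinel so the consumer stops

    if __name__ == '__main__':               # needed on Windows
        q = Queue()
        n = 10000
        p = Process(target=producer, args=(q, n))
        start = time.time()
        p.start()
        while q.get() is not None:           # drain the queue in the parent
            pass
        p.join()
        elapsed = time.time() - start
        print n / elapsed, 'objects/second'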

Although I reuse Queue, I wrote locks, semaphores and conditions from
scratch -- I could not see a sensible way to use the original
implementations.  (The implementations of those classes are actually
quite a bit shorter than the ones in threading.py.)
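
Since the point of the exercise is a ready-made threading-style API,
here is a rough sketch of what drop-in use of such a Lock could look
like (again, the `processing` import name and the threading-style
Process/Lock signatures are assumptions, not the module's confirmed API):

    from processing import Process, Lock    # assumed names

    def worker(lock, i):
        # serialize access to stdout across processes
        lock.acquire()
        try:
            print 'process', i, 'holds the lock'
        finally:
            lock.release()

    if __name__ == '__main__':
        lock = Lock()
        workers = [Process(target=worker, args=(lock, i)) for i in range(3)]
        for p in workers:
            p.start()
        for p in workers:
            p.join()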

By the way, on Windows the example files currently need to be run
from the command line rather than clicked on (but that is easily fixable).