Python consulting opportunity
We'd like to find an experienced Python programmer for a full-time, three-month consulting contract to help us with our product, Goombah (http://www.goombah.com).

Experience with wxPython and/or PyObjC would be a big plus. More importantly, we're looking for someone who can get up to speed very quickly and who is used to working independently.

A longer-term relationship is possible, but right now we're focused on a fixed set of features that we plan to add in the very near term. It's a lot of work and we'll need help to get it done as quickly as we want.

If this sounds like an opportunity you'd be interested in, or if you know of someone who might be a match, please let us know.

Thanks,

Gary

--
Gary Robinson
CTO
Emergent Music, LLC
[EMAIL PROTECTED]
207-942-3463
Company: http://www.goombah.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
xml.sax.xmlreader and expat
Hi,

We're using xml.sax.xmlreader in our app (http://www.goombah.com, which is written in Python). In Python 2.3.x, does that use the C-language expat parser under the hood?

I'm asking because we're wondering whether we can speed up the parsing significantly.

Thanks in advance for any input anyone can give.

Gary

--
Gary Robinson
VP/Innovation
Emergent Music, LLC
[EMAIL PROTECTED]
207-942-3463
Company: http://www.goombah.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
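A quick way to check this empirically (a sketch assuming a stock CPython 2.3 build, not taken from the post above): the default SAX parser there is xml.sax.expatreader.ExpatParser, which wraps the C-level pyexpat module, so the answer is generally "yes" unless a different parser has been registered.

import xml.sax
import xml.parsers.expat

# The parser make_parser() hands back in a stock CPython build is the expat-based one.
parser = xml.sax.make_parser()
print parser.__class__.__module__      # 'xml.sax.expatreader' in a stock build

# For comparison, the C expat bindings can also be driven directly,
# which skips some of the SAX layer's per-event Python overhead.
def start(name, attrs):
    pass

p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = start
p.Parse("<root><item/><item/></root>", 1)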
error: Error -5 while decompressing data from struct.unpack
One of our users received an exception, "error: Error -5 while decompressing data from struct.unpack," in the course of a struct.unpack operation. I haven't been able to discern what Error -5 is in this context, and in experiments here I wasn't able to elicit that exception.

There's a system error EINTR which is 5 -- but that's 5, not -5.

Any help on this would be most appreciated.

--
Gary Robinson
CTO
Emergent Music, LLC
[EMAIL PROTECTED]
207-942-3463
Company: http://www.goombah.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
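One guess worth checking against the question above (a sketch, not confirmed against the user's actual traceback): zlib's error code -5 is Z_BUF_ERROR, which zlib.decompress() raises when the compressed stream is incomplete, so a truncated or corrupted payload reaching the decompression step would produce exactly this kind of message.

import zlib

data = zlib.compress("some payload " * 100)
try:
    zlib.decompress(data[:-4])   # simulate a truncated/corrupted stream
except zlib.error, e:
    # Typically prints something like:
    # "Error -5 while decompressing data: incomplete or truncated stream"
    print e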
dictionaries and threads
Hi,

I know the Global Interpreter Lock ensures that only one Python thread has access to the interpreter at a time, which prevents a lot of situations where one thread might step on another's toes. But I'd like to ask about a specific situation just to be sure I understand things relative to some code I'm writing.

I've got a dictionary which is accessed by several threads at the same time (that is, to the extent that the GIL allows). The thing is, however, no two threads will ever be accessing the same dictionary items at the same time. In fact, the thread's ID from thread.get_ident() is the key to the dictionary, and a thread only modifies items corresponding to its own thread ID. A thread adds an item keyed by its ID when it's created, deletes it before it exits, and modifies the item's value in the meantime.

As far as I can tell, if the Python bytecodes that perform dictionary modifications are atomic, then there should be no problem. But I don't know that they are, because I haven't looked at the bytecodes.

Any feedback on this would be appreciated. For various reasons, we're still using Python 2.3 for the time being.

Gary

--
Gary Robinson
CTO
Emergent Music, LLC
[EMAIL PROTECTED]
207-942-3463
Company: http://www.goombah.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
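For concreteness, here is a minimal sketch of the access pattern described in the question above (hypothetical worker code, not from the application itself), using the Python 2.3-era thread and threading modules:

import thread
import threading

perThreadData = {}   # keyed by thread ID; each thread touches only its own slot

def worker():
    tid = thread.get_ident()
    perThreadData[tid] = 0            # add the item when the thread starts
    for i in range(1000):
        perThreadData[tid] += 1       # only this thread reads/writes this key
    del perThreadData[tid]            # remove it before the thread exits

threads = [threading.Thread(target=worker) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print perThreadData                   # {} -- every thread cleaned up its own entry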
Thread priorities?
In the application we're writing (http://www.goombah.com) it would be helpful for us to give one thread a higher priority than the others. We tried the recipe here:

http://groups-beta.google.com/group/comp.lang.python/msg/6f0e118227a5f5de

and it didn't seem to work for us.

We don't need many priority levels. We just need one thread to *temporarily* have a higher priority than the others.

One thing that occurred to me: there wouldn't by any chance be some way a thread could grab the GIL and not let it go until it is ready to do so explicitly? That would have the potential to solve our problem. Or maybe there's another way to temporarily let one thread have priority over all the others?

Gary

--
Gary Robinson
CTO
Emergent Music, LLC
[EMAIL PROTECTED]
207-942-3463
Company: http://www.goombah.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
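One cooperative, pure-Python workaround that might approximate the behavior asked about above (a sketch only; the worker functions are placeholders, and this is not a real priority mechanism): have the lower-priority threads block on a threading.Event while the time-critical thread has the floor.

import threading
import time

allowBackground = threading.Event()
allowBackground.set()

def backgroundWorker():
    for i in range(100):
        allowBackground.wait()        # park here while the urgent thread runs
        time.sleep(0.01)              # stand-in for one chunk of background work

def urgentWork():
    allowBackground.clear()           # ask the background threads to yield
    try:
        time.sleep(0.1)               # stand-in for the time-critical work
    finally:
        allowBackground.set()         # let the background threads resume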
getting the current function
Alex Martelli has a cookbook recipe, whoami, for retrieving the name of the
current function:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/66062. It uses
sys._getframe(). I'm a little wary about using sys._getframe() because of the
underscore prefix and the fact that the python docs say "This function should
be used for internal and specialized purposes only."
I feel more comfortable using the equivalent functionality in the inspect
module.
Also, I wanted a way to get access to the current function, not just its name.
I came up with:
import inspect

def thisfunc(n=0):
    currentFrame = inspect.currentframe()
    outerFrames = inspect.getouterframes(currentFrame)
    callingFrame = outerFrames[n + 1][0]
    callersCallingFrame = outerFrames[n + 2][0]
    return callersCallingFrame.f_locals[callingFrame.f_code.co_name]
The n argument lets you go up the stack to get callers of callers if that's
convenient. For instance:
def attr(name, value):
    caller = thisfunc(1)
    if not hasattr(caller, name):
        setattr(caller, name, value)

def x():
    attr('counter', 0)
    thisfunc().counter += 1
    print thisfunc().counter

x()
x()
x()
Calling x() three times prints:
1
2
3
Of course you don't need an attr() function for creating a function attribute,
but this way your code won't break if you change the name of the function.
I was about to create a cookbook recipe, but decided to ask for some feedback
from the community first.
Do you see situations where this function wouldn't work right?
Also, the python docs warn about storing frame objects due to the possibility
of reference cycles being created
(http://docs.python.org/lib/inspect-stack.html). But I don't think that's a
worry here since thisfunc() stores the references on the stack rather than the
heap. But I'm not sure. Obviously, it would be easy to add a try/finally with
appropriate del's, but I don't want to do it if it's not necessary.
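(For reference, here's roughly what I have in mind -- just a sketch of the defensive version, following the warning in the inspect docs:)

import inspect

def thisfunc(n=0):
    currentFrame = inspect.currentframe()
    outerFrames = inspect.getouterframes(currentFrame)
    try:
        callingFrame = outerFrames[n + 1][0]
        callersCallingFrame = outerFrames[n + 2][0]
        return callersCallingFrame.f_locals[callingFrame.f_code.co_name]
    finally:
        # Explicitly drop the local frame references so any reference
        # cycles can't keep whole stack frames alive.
        del currentFrame, outerFrames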
I welcome feedback of any type.
Thanks,
Gary
--
Gary Robinson
CTO
Emergent Music, LLC
[EMAIL PROTECTED]
207-942-3463
Company: http://www.goombah.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
re: getting the current function
> This all seems a bit too complicated. Are you sure you want to do
> this? Maybe you need to step back and rethink your problem.

In version 2.1 Python added the ability to attach attributes to functions -- see http://www.python.org/dev/peps/pep-0232/ for the justifications. A counter probably isn't one of them; I just used that as a quick example of using thisfunc().

I've just never liked the fact that you have to name the function when accessing those attributes from within the function. And I thought there might be other uses for something like thisfunc().

--
Gary Robinson
CTO
Emergent Music, LLC
[EMAIL PROTECTED]
207-942-3463
Company: http://www.goombah.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
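To illustrate the naming issue mentioned above, this is the standard PEP 232 style, where the function has to refer to itself by its own name (a trivial counter example, analogous to the one earlier):

def x():
    x.counter += 1        # breaks if x() is renamed and this line isn't updated
    print x.counter

x.counter = 0
x(); x(); x()             # prints 1, 2, 3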
32-bit python memory limits?
I'm running a Python job on OS X 10.5.3 with the Python 2.5.2 that's available as a binary download at python.org for OS X.

I ran a Python program tonight that ended up using much more memory than anticipated. It just kept on using more and more memory. Instead of killing it, I just watched it, using Activity Monitor.

I assumed that when it had 2GB allocated it would blow up, because I thought 32-bit Python could only address 2GB. But Activity Monitor reported that it had allocated 3.99GB of virtual memory before it finally blew up with malloc errors.

Was my understanding of a 2GB limit wrong? I guess so! But I'm pretty sure I saw it max out at 2GB on Linux... Anybody have an explanation, or is it just that my understanding of a 2GB limit was wrong? Or was it perhaps right for earlier versions, or on Linux...??

Thanks for any thoughts,

Gary

--
Gary Robinson
CTO
Emergent Music, LLC
personal email: [EMAIL PROTECTED]
work email: [EMAIL PROTECTED]
Company: http://www.emergentmusic.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
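A quick way to confirm what a given interpreter build actually is (a sketch, not output from the machine in question): a 32-bit build reports itself as such even when the OS kernel is 64-bit, and on a 32-bit build the theoretical ceiling is the 4GB virtual address space rather than 2GB; how much of that is usable in practice depends on the operating system.

import struct
import sys
import platform

print platform.architecture()[0]   # '32bit' or '64bit' for this interpreter build
print struct.calcsize("P") * 8     # pointer width in bits for this build
print sys.maxint                   # 2147483647 on a 32-bit build (Python 2.x)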
exit()
In Python 2.5.2, I notice that, in the interpreter or in a script, I can exit
with:
exit()
But I don't see exit() mentioned as a built-in function; rather the Python
Library Reference says we should use sys.exit(). Also, the reference says
sys.exit() is like raising SystemExit. But so is just calling exit(). For
instance,
exit('some error message')
has the same apparent effect as
raise SystemExit, 'some error message'.
Both return a status code of 1 and print the error string on the console.
Is exit() documented somewhere I haven't been able to find? Is there any reason
to use sys.exit() given exit()'s availability?
If there is an advantage to sys.exit() over exit(), then does sys.exit() have
any advantage over "raise SystemExit, 'some error message'" in cases where a
module has no other reason to import sys?
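One way to see where exit() comes from (a sketch assuming the stock CPython site module): it isn't a true built-in function but a Quitter object that site.py installs into the builtins at startup, which is why it's absent when Python is started with the -S option. Calling it raises SystemExit, which matches the behavior described above.

import __builtin__

print type(__builtin__.exit)    # <class 'site.Quitter'> (CPython 2.5/2.6)
print repr(exit)                # "Use exit() or Ctrl-D (i.e. EOF) to exit"
try:
    exit('some error message')
except SystemExit, e:
    print 'caught SystemExit:', e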
--
Gary Robinson
CTO
Emergent Music, LLC
personal email: [EMAIL PROTECTED]
work email: [EMAIL PROTECTED]
Company: http://www.emergentmusic.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
32-bit python on Opteron, Solaris 10?
I'm in the market for a server to run some Python code which is optimized via psyco.

Sun T2100 servers come with Solaris 10, which comes with Python pre-installed. Since those servers use the 64-bit Opteron, I would assume that the Python is a 64-bit version. (Does anyone know whether this is true or false?)

The Psyco documentation says that for psyco to work, Python needs to be compiled in "32-bit compatibility mode". I've never compiled Python, or tried having multiple versions running on Solaris. I looked at the README for the Python source and didn't see anything about "32-bit compatibility mode", though I may have missed it. Or is it a matter of choosing a 32-bit compiler to compile against?

Finally, I'm wondering if anyone could give any feedback about problems/roadblocks in compiling Python in "32-bit compatibility mode" and running it alongside the pre-installed Python that comes with Solaris 10.

Any input or tips would be greatly appreciated.

Thanks,

Gary

--
Gary Robinson
VP/Innovation
Emergent Music, LLC
[EMAIL PROTECTED]
207-942-3463
Company: http://www.goombah.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
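Once a candidate build is in place, a trivial smoke test like this (a sketch; psyco.full() is the documented way to enable psyco globally) should answer whether that particular Python can use psyco at all:

try:
    import psyco
    psyco.full()
    print "psyco is active on this build"
except ImportError, e:
    print "psyco not usable here:", e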
Re: Python-list Digest, Vol 31, Issue 94
> I can't see anything
> called a T2100. I have 3 X2100 servers which are opterons.

Right, I meant X2100s -- sorry.

> Python cannot use psyco on opterons at all -
> 32 bit mode or otherwise.

Are you sure? I'm not saying I have reason to believe differently, but I just want to be sure. The Psyco docs say it requires "A 32-bit Pentium or any other Intel 386 compatible processor. Sorry, no other processor is supported. Psyco does not support the 64-bit x86 architecture, unless you have a Python compiled in 32-bit compatibility mode".

Sun's "The Solaris Operating System on x86 Platforms" guide says "The AMD64 (64bit x86) architecture was done in a way very similar to how Intel had done the i80386, and processors based on AMD64 (much unlike Itanium/IA64) are, in good old x86 tradition, fully binary backward compatible. Of course, actually using the new 64bit operating mode requires porting operating system and applications (like using 32bit on the i80386 did require at the time). But even when running a 64bit operating system does AMD64 provide a sandboxed 32bit environment to run existing applications in (again, like the i80386 which allowed the same for 16bit programs running on a 32bit OS). Therefore the AMD64 architecture offers much better investment protection than IA64 – which will not run existing 32bit operating systems or applications."

As I read it, this says that when the Opteron is used in "the new 64bit operating mode," as it is on the X2100, it is no longer Intel 386 compatible except in a "sandboxed 32bit environment". Then the question would be whether 32-bit Python can be run in that sandboxed 32-bit environment. Since I don't have an X2100 yet and haven't played with it or studied the docs, I don't know what that entails. Have you explored it?

> The T2000 has a new
> cpu for which I have no data about python performance.

The T2000 has up to 8 relatively slow cores. Python's GIL (and the way the app is designed) eliminates the possibility of making use of more than one core for now. In the future that may change, but I need an immediate solution, and re-architecting is not possible right now.

> If you can benchmark your own code on a target machine, on solaris,
> linux or windows, you can quickly figure out if it's "fast enough".

The more speed I have, the better output I'll be able to get. This application is a case where there is no "fast enough". The more speed, the better, within financial constraints.

Gary

--
Gary Robinson
VP/Innovation
Emergent Music, LLC
[EMAIL PROTECTED]
207-942-3463
Company: http://www.goombah.com
Blog:http://www.garyrobinson.net

> Message: 2
> Date: Fri, 07 Apr 2006 02:34:02 GMT
> From: ross lazarus <[EMAIL PROTECTED]>
> Subject: Re: 32-bit python on Opteron, Solaris 10?
> To: [email protected]
> Message-ID: <[EMAIL PROTECTED]>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> The answers depend entirely on the cpu in my experience. I'm staring
> at http://www.sun.com/servers/index.jsp and I can't see anything
> called a T2100. I have 3 X2100 servers which are opterons. Psyco only
> runs on x86 cpu hardware. Python cannot use psyco on opterons at all -
> 32 bit mode or otherwise. Pypy may fix this soon. The T2000 has a new
> cpu for which I have no data about python performance. I am sure it
> will run, but it may or may not be super fast if that's important to
> you.
> On the ultrasparcs I have had an opportunity to fool with, python
> runs "fast enough" for computationally intensive tasks (ie it's
> useable) but relatively slowly compared to the x86 hardware I have
> access to - particularly if psyco is available. I was once told that
> python was more at home on CISC than on RISC CPU architecture and
> being a trusting soul, accept this since it's consistent my own
> limited experiments.
>
> If you can benchmark your own code on a target machine, on solaris,
> linux or windows, you can quickly figure out if it's "fast enough".
> Exactly what means depends on the throughput you require and a stopwatch.
>
> Your mileage may vary and there may be sun mavens on the list with
> more reliable information than mine.
>
> Gary Robinson wrote:
>> I'm in the market for a server to run some python code which is
>> optimized via psyco.
>>
>> Sun T2100 servers come with Solaris 10, which comes with python
>> pre-installed.
>>
>> Since those servers use the 64-bit Opteron box, I would assume that the
>> Python is a 64-bit version. (Does anyone know whether this is
>> true/false?)
>>
python memory use
The chart at http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=javasteady&lang2=python&box=1 is very interesting to me because it shows CPython using much less memory than Java for most tests.

I'd be interested in knowing whether anybody can share info about how representative those test results are. For instance, suppose we're talking about a huge dictionary that maps integers to lists of integers (something I use in my code). Would something like that really take up much more memory in Java (using the closest equivalent Java data structures) than in CPython?

I find it hard to believe that that would be the case, but I'm quite curious. (I could test the particular case I mention, but I'm wondering if someone has some fundamental knowledge that would lead to a basic understanding.)

--
Gary Robinson
CTO
Emergent Music, LLC
personal email: [email protected]
work email: [email protected]
Company: http://www.flyfi.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
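For the specific structure mentioned above, a rough first-order measurement on the CPython side can be made with sys.getsizeof() (Python 2.6+). This is a sketch only: getsizeof is shallow, small ints are shared objects, and none of this accounts for allocator overhead, so it's an approximation rather than a benchmark.

import sys

d = {}
for k in xrange(100000):
    d[k] = range(10)          # 100,000 keys, each mapping to a 10-element list

total = sys.getsizeof(d)
for k, v in d.iteritems():
    total += sys.getsizeof(k) + sys.getsizeof(v)
    total += sum(sys.getsizeof(i) for i in v)
print "roughly %.1f MB (shallow estimate)" % (total / (1024.0 * 1024.0))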
collections use __next__() in python 2.6?
The Python 2.6.4 docs for collections at http://docs.python.org/library/collections.html say that __next__() is an abstract method for the Iterable ABC. But my understanding is that __next__() isn't supposed to be used until Python 3.

Also, I'm using the Mapping ABC, which inherits from Iterable, and it doesn't seem to work if I define __next__(); I am not seeing problems if I define next() instead.

What am I missing?

--
Gary Robinson
CTO
Emergent Music, LLC
personal email: [email protected]
work email: [email protected]
Company: http://www.flyfi.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
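A minimal example of the 2.x spelling that works (a sketch using the Iterator ABC rather than the actual Mapping subclass from the question above): in Python 2.x the iterator protocol method is next(), not __next__(), so that's the method an ABC-based iterator should define.

from collections import Iterator

class CountDown(Iterator):
    def __init__(self, start):
        self.n = start
    def next(self):                # Python 2 protocol method; Python 3 renames it __next__
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

print list(CountDown(3))           # [3, 2, 1]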
pickling question
When you define a class in a script, and then pickle instances of that class in
the same script and store them to disk, you can't load that pickle in another
script. At least not the straightforward way
[pickle.load(file('somefile.pickle'))]. If you try it, you get an
AttributeError during the unpickling operation.
There is no problem, of course, if the class is defined in a module which is
imported by the pickling script. pickle.load(file('somefile.pickle')) then
works.
Rather than provide specific examples here, there's a blog post from 2005 that
discusses this issue in depth and presents the problem very well:
http://stefaanlippens.net/pickleproblem. (I tested in Python 2.6 yesterday and
the same issue persists.)
Questions:
1) Does this have to be the case, or is it a design problem with pickles that
should be remedied?
2) Is there an easier way around it than moving the class definition to a
separate module? The blog post I point to above suggests putting "__module__ =
os.path.splitext(os.path.basename(__file__))[0]" into the class definition, but
that's not working in my testing because when I do that, the pickling operation
fails. Is there something else that can be done?
This is obviously not a huge problem. Substantial classes should usually be
defined in a separate module anyway. But sometimes it makes sense for a script
to define a really simple, small class to hold some data, and needing to create
a separate module just to contain such a class can be a little annoying.
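For concreteness, here's a minimal reproduction of what I mean (hypothetical file and class names):

# writer.py -- defines the class and writes the pickle
import pickle

class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

f = open('point.pickle', 'wb')
pickle.dump(Point(1, 2), f)    # the instance is recorded as __main__.Point
f.close()

# reader.py -- run as a separate script; now __main__ is reader.py, which has
# no Point class, so unpickling fails with an AttributeError:
#
#     import pickle
#     obj = pickle.load(open('point.pickle', 'rb'))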
--
Gary Robinson
CTO
Emergent Music, LLC
personal email: [email protected]
work email: [email protected]
Company: http://www.flyfi.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
Re: pickling question
Many thanks for the responses I've received here to my question (below).
After reading the responses, I understand what the problem is much better. In
addition to the solutions mentioned in the responses, now that I understand the
problem I'll offer up my own solution. The following is an executable script
named pick1.py:
===
import pickle

class A(object):
    def __init__(self, x):
        self.x = x

def writePickle():
    import pick1
    a5 = pick1.A(5)
    f = open('pick1.pickle', 'wb')
    pickle.dump(a5, f)
    f.close()

writePickle() # The dumped pickle can be read by any other script.
===
That is, we need to do the pickling in a context where the module name for the
class is "pick1" rather than "__main__". The example above allows us to do that
without changing __name__ or doing anything else of that nature.
Thanks again!
Gary
> When you define a class in a script, and then pickle instances of
> that class in the same script and store them to disk, you can't load
> that pickle in another script. At least not the straightforward way
> [pickle.load(file('somefile.pickle'))]. If you try it, you get an
> AttributeError during the unpickling operation.
>
> There is no problem, of course, if the class is defined in a module
> which is imported by the pickling script.
> pickle.load(file('somefile.pickle')) then works.
>
> Rather than provide specific examples here, there's a blog post from
> 2005 that discusses this issue in depth and presents the problem very
> well: http://stefaanlippens.net/pickleproblem. (I tested in Python
> 2.6 yesterday and the same issue persists.)
>
> Questions:
>
> 1) Does this have to be the case, or is it a design problem with
> pickles that should be remedied?
>
> 2) Is there an easier way around it than moving the class definition
> to a separate module? The blog post I point to above suggests putting
> "__module__ = os.path.splitext(os.path.basename(__file__))[0]" into
> the class definiton, but that's not working in my testing because
> when I do that, the pickling operation fails. Is there something else
> that can be done?
>
> This is obviously not a huge problem. Substantial classes should
> usually be defined in a separate module anyway. But sometimes it
> makes sense for a script to define a really simple, small class to
> hold some data, and needing to create a separate module just to
> contain such a class can be a little annoying.
--
Gary Robinson
CTO
Emergent Music, LLC
personal email: [email protected]
work email: [email protected]
Company: http://www.flyfi.com
Blog:http://www.garyrobinson.net
--
http://mail.python.org/mailman/listinfo/python-list
