generator object, next method
>>> gen = iterator()
>>> gen.next
>>> gen.next
>>> gen.next
>>> gen.next
>>> gen.next is gen.next
False

What is behind this apparently strange behaviour? (The .next method seems
to alternately bind to two different objects)

Sw.
--
http://mail.python.org/mailman/listinfo/python-list
Re: python profiling, hotspot and strange execution time
> OK - first of all, as someone else has asked, what platform are you
> running? I'm assuming it's windows since you're referring to
> time.clock() and then later saying "wall clock".
Actually, no. I am working on x86 Linux (HT disabled for this
testing, as I thought it might introduce some subtleties). I am not
sure what you mean by "wall clock"?
>
> Next, what are you hoping that the profiler will give you? If you're
> expecting it to give you the big picture of your application's
> performance and give you "real runtime numbers", you may be
> disappointed. It is a deterministic profiler and will give you CPU time
> spent in different areas of code rather than an overall "how long did
> this thing take to run?".
I am not sure I understand the big difference between "time spent in
different areas of code" and "how long did this thing take to run?".
Looking at python doc for deterministic profiling, I understand the
implementation difference, and the performance implications, but I
don't see why deterministic profiling would not give me an overall
picture ?
> Well, let's just add more confusion to the pot, shall we? Look at this
> example (a slight hack from yours)::
>
> import time
> import hotshot
> import hotshot.stats
>
> def run_foo():
>     print time.clock()
>     print time.time()
>
>     time.sleep(5)
>
>     print time.clock()
>     print time.time()
>
> prof = hotshot.Profile("essai.prof")
> benchtime = prof.runcall(run_foo)
> prof.close()
> stats = hotshot.stats.load("essai.prof")
> stats.strip_dirs()
> stats.sort_stats('time', 'calls')
> stats.print_stats(20)
>
> and the output::
>
> 0.24
> 1126011669.55
> 0.24
> 1126011674.55
>          1 function calls in 0.000 CPU seconds
>
>    Ordered by: internal time, call count
>
>    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
>         1    0.000    0.000    0.000    0.000 tmphQNKbq.py:6(run_foo)
>         0    0.000             0.000          profile:0(profiler)
>
>
>
> I inserted a time.time() call since I'm on Linux and time.clock()
> returns a process's CPU time and wanted to show the "wall clock time" as
> it were. So, the stats show 0 time taken, whereas time.time() shows 5
> seconds. It's because the time.sleep() took a negligible amount of CPU
> time, which is what the profiler looks at.
Well, your example actually makes more sense to me :) I understand the
difference between CPU time and time spent in the python code (even if
I was not clear about it in my previous post...). But this case does
not apply to my code: it is never idle, and takes 100 % of the CPU,
with no other CPU-consuming task running.
> I would attribute the wall clock and profile time difference to the
> overhead of hotshot. While hotshot is miles better than the "regular"
> profiler, it can still take a little time to profile code.
Well, if hotshot reported a timing which is longer than the execution
time without it, I would have considered that to be normal. Even in C,
using gprof has a non-negligible overhead most of the time.
What I don't understand is why hotshot reports that do_foo is executed
in 2 seconds, whereas it actually takes more than 10 seconds. Is it
because I don't understand what deterministic profiling is about?
David
--
http://mail.python.org/mailman/listinfo/python-list
Re: generator object, next method
[EMAIL PROTECTED] wrote:
> >>> gen = iterator()
> >>> gen.next
> <method-wrapper object at 0x...>

Behind the scenes, gen.next is bound to _, i.e. it cannot be
garbage-collected. Then...

> >>> gen.next
> <method-wrapper object at 0x...>

a new method wrapper is created and assigned to _, and the previous method
wrapper is now garbage-collected. The memory location is therefore
available for reuse for...

> >>> gen.next
> <method-wrapper object at 0x...>

yet another method wrapper -- and so on.

> >>> gen.next
> <method-wrapper object at 0x...>
> >>> gen.next is gen.next
> False
>
> What is behind this apparently strange behaviour? (The .next method
> seems to alternately bind to two different objects)
But it isn't. What seems to be the same object are distinct objects at the
same memory location. See what happens if you inhibit garbage-collection by
keeping a reference of the method wrappers:
>>> it = iter("")
>>> [it.next for _ in range(5)]
[<method-wrapper object at 0x...>, <method-wrapper object at 0x...>,
<method-wrapper object at 0x...>, <method-wrapper object at 0x...>,
<method-wrapper object at 0x...>]
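The same experiment is easy to reproduce in current CPython (sketched here with the __next__ spelling so it runs on modern interpreters): as long as every wrapper is kept alive, no memory location can be reused, so all the accesses are observably distinct objects.

```python
gen = iter("abc")

# Two accesses in a single expression: both wrappers are alive at the
# same moment, so they cannot share a memory location.
assert gen.__next__ is not gen.__next__

# Keep five wrappers alive at once: five distinct objects with five
# distinct addresses.
wrappers = [gen.__next__ for _ in range(5)]
assert len(set(id(w) for w in wrappers)) == 5
```

Dropping the references between accesses is exactly what lets the address be reused, making the wrapper *appear* to alternate between two objects.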
Peter
--
http://mail.python.org/mailman/listinfo/python-list
pretty windows installer for py scripts
Any recommendations on a windows packager/installer that's free? I need it to allow non-tech users to install some python scripts... you know, "Click Next"... "Click Next"... "Click Finish"... "You're Done!" and everything just magically works ;) -- http://mail.python.org/mailman/listinfo/python-list
Re: CGI File Uploads and Progress Bars
> So, bottom line: Does anyone know how to get the size of the incoming file
> data without reading the whole thing into a string? Can I do something with
> content_header?

http://www.faqs.org/rfcs/rfc1867.html

It seems that _maybe_ you can use the content-length http header. But it
looks as if it's not mandatory in this case - and of course uploading
multiple files will only allow for an overall percentage.

Diez
--
http://mail.python.org/mailman/listinfo/python-list
Re: generator object, next method
[EMAIL PROTECTED] wrote:
> gen = iterator()
> gen.next
> gen.next
> gen.next
> gen.next
> gen.next is gen.next
> False
>
> What is behind this apparently strange behaviour? (The .next method
> seems to alternately bind to two different objects)

It is a combination of factors.

1) Every time you access gen.next you create a new method-wrapper object.

2) Typing an expression at the interactive prompt implicitly assigns the
result of the expression to the variable '_'

3) When the method-wrapper is destroyed the memory becomes available to
be reused the next time a method-wrapper (or other object of similar
size) is created.

So in fact you have 6 different objects produced by your 6 accesses to
gen.next although (since you never have more than 3 of them in existence
at a time) there are probably only 3 different memory locations involved.
--
http://mail.python.org/mailman/listinfo/python-list
Re: generator object, next method
Duncan Booth <[EMAIL PROTECTED]> writes:
> 1) Every time you access gen.next you create a new method-wrapper object.

Why is that? I thought gen.next is a callable and gen.next() actually
advances the iterator. Why shouldn't gen.next always be the same object?
--
http://mail.python.org/mailman/listinfo/python-list
Re: determine if os.system() is done
Xah Lee wrote:
> suppose i'm calling two system processes, one to unzip, and one to
> “tail” to get the last line. How can i determine when the first
> process is done?
>
> Example:
>
> subprocess.Popen([r"/sw/bin/gzip","-d","access_log.4.gz"]);
>
> last_line=subprocess.Popen([r"/usr/bin/tail","-n 1","access_log.4"],
> stdout=subprocess.PIPE).communicate()[0]
>
> of course, i can try workarounds something like os.system("gzip -d
> thiss.gz && tail thiss"), but i wish to know if there's non-hack way to
> determine when a system process is done.
>
> Xah
> [EMAIL PROTECTED]
> ∑ http://xahlee.org/
>
I know you've found the answer to your question, however for the exact
example you gave a much better solution comes to mind...
import gzip
log_file = gzip.open("access_log.4.gz")
last_line = log_file.readlines()[-1]
log_file.close()
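For the original question - knowing when the first process is done - the subprocess module answers it directly: keep the Popen object and wait on it (communicate() also waits). A sketch, with trivial Python one-liners standing in for the real gzip and tail commands:

```python
import subprocess
import sys

# Stand-in for the gzip step: run it and wait for it to finish.
first = subprocess.Popen(
    [sys.executable, "-c", "print('decompressed')"],
    stdout=subprocess.PIPE)
first.communicate()              # communicate() waits for the child to exit
assert first.returncode == 0     # the first process is definitely done here

# Only now start the command that depends on the first one's output.
second = subprocess.Popen(
    [sys.executable, "-c", "print('last line')"],
    stdout=subprocess.PIPE)
last_line = second.communicate()[0].decode().strip()
```

This avoids the `cmd1 && cmd2` shell workaround entirely: the ordering is enforced in Python.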
Regards
Martin
--
http://mail.python.org/mailman/listinfo/python-list
Re: generator object, next method
> Why is that? I thought gen.next is a callable and gen.next() actually
> advances the iterator. Why shouldn't gen.next always be the same object?

That is, in essence, my question. Executing the below script, rather than
typing at a console, probably clarifies things a little.

Sw.

#---
def iterator():
    yield None

gen = iterator()

# gen.next is bound to x, and therefore, gen.next should not be GC'd?
x = gen.next
y = gen.next

print x
print y
print gen.next
#---
--
http://mail.python.org/mailman/listinfo/python-list
Re: PEP-able? Expressional conditions
On Wed, 7 Sep 2005 17:53:46 -0500, Terry Hancock <[EMAIL PROTECTED]> wrote:
> On Wednesday 07 September 2005 02:44 pm, Paul Rubin wrote:
>> Terry Hancock <[EMAIL PROTECTED]> writes:
>>> def ternary(condition, true_result, false_result):
>>>     if condition:
>>>         return true_result
>>>     else:
>>>         return false_result
>>>
>>> Almost as good, and you don't have to talk curmudgeons into providing
>>> it for you.
>>
>> Not the same at all. It evaluates both the true and false results,
>> which may have side effects.
>
> If you are depending on that kind of nit-picking behavior,
> you have a serious design flaw, a bug waiting to bite you,
> and code which shouldn't have been embedded in an expression
> in the first place.

Or you might know exactly what you are doing ;-)

Regards,
Bengt Richter
--
http://mail.python.org/mailman/listinfo/python-list
reading the last line of a file
Martin Franklin wrote:
> import gzip
> log_file = gzip.open("access_log.4.gz")
> last_line = log_file.readlines()[-1]
> log_file.close()
does the
log_file.readlines()[-1]
actually read all the lines first?
i switched to system call with tail because originally i was using a
pure Python solution
inF = gzip.GzipFile(ff, 'rb')
s = inF.readlines()
inF.close()
last_line = s[-1]
and since the log file is 100 megabytes it takes a long time and hogs
massive memory.
Xah
[EMAIL PROTECTED]
∑ http://xahlee.org/
--
http://mail.python.org/mailman/listinfo/python-list
Re: generator object, next method
[EMAIL PROTECTED] wrote:
>> Why is that? I thought gen.next is a callable and gen.next() actually
>> advances the iterator. Why shouldn't gen.next always be the same object?
>
> That is, in essence, my question.

Because bound methods are generated on the fly - google this group, there
have been plenty of discussions about that.

Diez
--
http://mail.python.org/mailman/listinfo/python-list
RE: pretty windows installer for py scripts
> Any recommendations on a windows packager/installer that's
> free? I need it to allow non-tech users to install some
> python scripts... you know, "Click Next"... "Click Next"...
> "Click Finish"... "You're Done!" and everything just
> magically works ;)

Last time I had to do this, I used NSIS from Nullsoft. Try
http://nsis.sf.net

HTH
Steve
--
http://mail.python.org/mailman/listinfo/python-list
Re: generator object, next method
Paul Rubin wrote:
> Duncan Booth <[EMAIL PROTECTED]> writes:
>> 1) Every time you access gen.next you create a new method-wrapper
>> object.
>
> Why is that? I thought gen.next is a callable and gen.next() actually
> advances the iterator. Why shouldn't gen.next always be the same
> object?

It is a consequence of allowing methods to be first class objects, so
instead of just calling them you can also save the bound method in a
variable and call it with the 'self' context remembered in the method.

It is easier to see what is happening if you look first at ordinary
Python classes and instances:

>>> class C:
...     def next(self):
...         pass
...
>>> c = C()
>>> c.next
<bound method C.next of <__main__.C instance at 0x...>>
>>> C.next
<unbound method C.next>

Here, 'next' is a method of the class C. You can call the unbound method,
but then you have to explicitly pass the 'self' argument. Whenever you
access the method through an instance it creates a new 'bound method'
object which stores references to both the original function, and the
value to be passed in as the first parameter. Usually this object is
simply called and discarded, but you can also save it for later use.

Python could perhaps bypass the creation of bound method objects when
calling a function directly, but it would still need them for cases where
the method isn't called immediately (and it isn't obvious it would be an
improvement if it tried to optimise this case).

It would be possible for a language such as Python to try to either
generate these bound method objects in advance (which would be horribly
inefficient if you created lots of objects each of which had hundreds of
methods which were never called), or to cache bound method objects so as
to reuse them (which would be inefficient if you have lots of methods
called only once on each object). Python chooses to accept the hit of
creating lots of small objects, but tries to make the overhead of this as
low as possible (which is one reason the memory gets reused immediately).
The next method in a generator works in the same way as bound methods,
although the actual types involved are C coded. You can still access both
the bound and unbound forms of the next method. The bound form carries
the information about the first parameter, and the unbound form has to be
given that information:

>>> gen = iterator()
>>> gen.next
<method-wrapper object at 0x...>
>>> type(gen).next
<slot wrapper 'next' of 'generator' objects>
>>> gen.next()
'hi'
>>> type(gen).next(gen)
'hi'
--
http://mail.python.org/mailman/listinfo/python-list
popen in thread on QNX
I am still in the process of creating my script which will run commands
received from a socket. My script works perfectly on Linux, but doesn't
work on QNX!

File "/usr/lib/python2.4/popen2.py", line 108, in __init__
    self.pid = os.fork()
OSError: [Errno 89] Function not implemented

When I try to use os.popen3 - it works. But when I try to use it in a new
thread - I see that error message. Do you see any solution? This script
must work on QNX, and the command must run on a thread, because I need to
stop it after a timeout. I need popen to see stdout and stderr. Any ideas?
--
http://mail.python.org/mailman/listinfo/python-list
Re: reading the last line of a file
Xah Lee wrote:
> Martin Franklin wrote:
>
>>import gzip
>>log_file = gzip.open("access_log.4.gz")
>>last_line = log_file.readlines()[-1]
>>log_file.close()
>
>
> does the
> log_file.readlines()[-1]
> actually read all the lines first?
Yes I'm afraid it does.
>
> i switched to system call with tail because originally i was using a
> pure Python solution
>
> inF = gzip.GzipFile(ff, 'rb');
> s=inF.readlines()
> inF.close()
> last_line=s[-1]
>
> and since the log file is 100 megabytes it takes a long time and hogs
> massive memory.
>
Ok, in that case stick to your shell based solution, although 100
megabytes does not sound that large to me; I guess it is relative
to the system you are running on :) (I have over a gig of memory here)
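A gzipped stream has to be decompressed from the start anyway, but for a plain (uncompressed) log the classic trick is to seek near the end and read only the tail. A sketch, assuming the last line fits inside the tail window:

```python
import os
import tempfile

def last_line(path, tail_bytes=4096):
    # Read only the final tail_bytes of the file instead of the whole
    # thing; assumes the last line is shorter than that window.
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        f.seek(max(0, size - tail_bytes))
        return f.read().splitlines()[-1]

# Tiny self-contained demo with a temporary file.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"first\nsecond\nlast\n")
tmp.close()
result = last_line(tmp.name)
os.unlink(tmp.name)
```

Memory use is constant regardless of the file size, which is exactly what the 100-megabyte case needs.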
> Xah
> [EMAIL PROTECTED]
> ∑ http://xahlee.org/
>
--
http://mail.python.org/mailman/listinfo/python-list
Re: popen in thread on QNX
It works when I use os.system() instead of os.popen3(), but with
os.system() I have no access to stdout and stderr :-(
--
http://mail.python.org/mailman/listinfo/python-list
Re: Django Vs Rails
I actually like a framework that reflects on my database. I am more of a
visual person. I have tools for all my favorite databases that allow me
to get a glance at ER diagrams, and I would rather develop my data models
in these tools than in code. Furthermore, I rather like the idea of
parsimonious use of code (which is probably why I use Python in the first
place) and do not really like manually specifying data schemas in code.
Is someone familiar with a Python framework that builds by reflection?
--
http://mail.python.org/mailman/listinfo/python-list
Re: reading the last line of a file
Martin Franklin wrote:
> Xah Lee wrote:
>
>>Martin Franklin wrote:
>>
>>
>>>import gzip
>>>log_file = gzip.open("access_log.4.gz")
>>>last_line = log_file.readlines()[-1]
>>>log_file.close()
>>
>>
>>does the
>>log_file.readlines()[-1]
>>actually read all the lines first?
>
>
>
> Yes I'm afraid it does.
>
>
>
>>i switched to system call with tail because originally i was using a
>>pure Python solution
>>
>> inF = gzip.GzipFile(ff, 'rb');
>> s=inF.readlines()
>> inF.close()
>> last_line=s[-1]
>>
>>and since the log file is 100 megabytes it takes a long time and hogs
>>massive memory.
>>
>
>
> Ok, in that case stick to your shell based solution, although 100
> megabytes does not sound that large to me I guess it is relative
> to the system you are running on :) (I have over a gig of memory here)
>
And just a few minutes after I sent that... this...
import gzip
logfile = gzip.open("access_log.4.BIG.gz")
## seek relative to the end of the file
logfile.seek(-500)
last_line = logfile.readlines()[-1]
logfile.close()
print last_line
Works quite fast on my machine...
Regards
Martin
--
http://mail.python.org/mailman/listinfo/python-list
Re: dual processor
Paul Rubin wrote:
> Jorgen Grahn <[EMAIL PROTECTED]> writes:
>> I feel the recent SMP hype (in general, and in Python) is a red
>> herring. Why do I need that extra performance? What application would
>> use it?
>
> How many mhz does the computer you're using right now have? When did
> you buy it? Did you buy it to replace a slower one? If yes, you must
> have wanted more performance. Just about everyone wants more
> performance. That's why mhz keeps going up and people keep buying
> faster and faster cpu's.
>
> CPU makers seem to be running out of ways to increase mhz. Their next
> avenue to increasing performance is SMP, so they're going to do that
> and people are going to buy those. Just like other languages, Python
> makes perfectly good use of increasing mhz, so it keeps up with them.
> If the other languages also make good use of SMP and Python doesn't,
> Python will fall back into obscurity.

Just to back your point up, here is a snippet from theregister about
Sun's new server chip. (This is a rumour piece but theregister usually
gets it right!)

  Sun has positioned Niagara-based systems as low-end to midrange Xeon
  server killers. This may sound like a familiar pitch - Sun used it
  with the much delayed UltraSPARC IIIi processor. This time around
  though Sun seems closer to delivering on its promises by shipping an
  8 core/32 thread chip. It's the most radical multicore design to date
  from a mainstream server processor manufacturer and arrives more or
  less on time.

It goes on later to say "The physical processor has 8 cores and 32
virtual processors" and runs at 1080 MHz. So fewer GHz but more CPUs is
the future according to Sun.

http://www.theregister.co.uk/2005/09/07/sun_niagara_details/
--
Nick Craig-Wood <[EMAIL PROTECTED]> -- http://www.craig-wood.com/nick
--
http://mail.python.org/mailman/listinfo/python-list
Re: PEP-able? Expressional conditions
Terry Hancock <[EMAIL PROTECTED]> writes:
>> Not the same at all. It evaluates both the true and false results,
>> which may have side effects.
>
> If you are depending on that kind of nit-picking behavior,
> you have a serious design flaw, a bug waiting to bite you,
> and code which shouldn't have been embedded in an expression
> in the first place.

Are you kidding? Example (imagine a C-like ?: operator in Python):

x = (i < len(a)) ? a[i] : None   # make sure i is in range

Where's the design flaw? Where's the bug waiting to bite? That's a
completely normal use of a conditional expression. If the conditional
expression works correctly, this does the right thing, as intended. If
both results get evaluated, it throws a subscript out of range error,
not good.

Every time this topic comes up, the suggestions that get put forth are
far more confusing than just adding conditional expressions and being
done with it.
--
http://mail.python.org/mailman/listinfo/python-list
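The difference is easy to demonstrate. What eventually landed in Python 2.5 as the conditional expression evaluates only the chosen branch, while a function-based ternary evaluates both arguments before the call even happens - a sketch of the example above:

```python
a = [10, 20]
i = 5

# Conditional expression: the false branch is taken and the
# out-of-range subscript is never evaluated.
x = a[i] if i < len(a) else None
assert x is None

# Function version: both result arguments are evaluated at the call
# site, so the subscript raises IndexError before ternary() runs.
def ternary(condition, true_result, false_result):
    if condition:
        return true_result
    return false_result

try:
    y = ternary(i < len(a), a[i], None)
except IndexError:
    y = "raised"
assert y == "raised"
```

This is exactly the "side effects" objection: short-circuiting is part of the semantics, not a nicety.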
Re: reading the last line of a file
Martin Franklin wrote:
> Martin Franklin wrote:
>
>>Xah Lee wrote:
>>
>>
>>>Martin Franklin wrote:
>>>
>>>
>>>
>>>>import gzip
>>>>log_file = gzip.open("access_log.4.gz")
>>>>last_line = log_file.readlines()[-1]
>>>>log_file.close()
>>>
>>>
>>>does the
>>>log_file.readlines()[-1]
>>>actually read all the lines first?
>>
>>
>>
>>Yes I'm afraid it does.
>>
>>
>>
>>
>>>i switched to system call with tail because originally i was using a
>>>pure Python solution
>>>
>>>inF = gzip.GzipFile(ff, 'rb');
>>>s=inF.readlines()
>>>inF.close()
>>>last_line=s[-1]
>>>
>>>and since the log file is 100 megabytes it takes a long time and hogs
>>>massive memory.
>>>
>>
>>
>>Ok, in that case stick to your shell based solution, although 100
>>megabytes does not sound that large to me I guess it is relative
>>to the system you are running on :) (I have over a gig of memory here)
>>
>
>
> And just a few minutes after I sent that... this...
>
> import gzip
>
> logfile = gzip.open("access_log.4.BIG.gz")
>
> ## seek relative to the end of the file
> logfile.seek(-500)
>
> last_line = logfile.readlines()[-1]
>
> logfile.close()
>
> print last_line
>
>
> Works quite fast on my machine...
>
whoops, no it doesn't; I was looking at the wrong window :( just ignore
me please :)
--
http://mail.python.org/mailman/listinfo/python-list
Re: Job Offer in Paris, France : R&D Engineer (Plone)
Peter Hansen <[EMAIL PROTECTED]> said : > I can't let that pass. :-) I believe it was well established in posts > a few years ago that while the programming-language equivalent of > Esperanto is clearly Python, "Volapuke" was most definitely > reincarnated as *Perl*. Sorry -- I've been reading c.l.py daily for quite some time, but I must have skipped that particular thread :-) Of volapük I know only the name, so if that was the consensus here, then it's certainly true... -- YAFAP : http://www.multimania.com/fredp/ -- http://mail.python.org/mailman/listinfo/python-list
Re: Sockets: code works locally but fails over LAN
Thanks, Bryan, for the details!

Btw, the newest oops in the topic's subject is: the code does not work in
this case:

sqls_host, sqls_port = '192.168.0.8', 1433
proxy_host, proxy_port = '192.168.0.3', 1434
## proxy_host, proxy_port = '127.0.0.1', 1434
## proxy_host, proxy_port = '', 1434

I.e. when both Python and the vbs script run on one machine (with ip =
192.168.0.3) and SQL Server runs on another (with ip = 192.168.0.8).

How exactly it does not work: in the idle window only one line is
printed: VB_SCRIPT:. That's all. No errors from Python. After the timeout
expires I get an error message from VBS (something like preHandShake()
failed; I've never seen it before).

I just wonder whether it MUST (or not) work at all (IN THEORY)?

PS: again, without Python, vbs and sql server contact each other
PERFECTLY.
--
http://mail.python.org/mailman/listinfo/python-list
Migrate PYD Files
Is there a way to use old pyd files (Python 1.5.2) with a newer version of Python without recompiling them? Because the source code is not available anymore, I'm wondering whether it's possible or not to change few bytes with a hex editor (version number?). I'd like to give it a try since the modules don't use too critical functions. Thanks dd -- http://mail.python.org/mailman/listinfo/python-list
Printer List from CUPS
Hi, I want to get the printer list from CUPS. I found some ways using lpstat -p and http://localhost:631/printers but, these ways require some parsing and I am not sure, if the parsing works all the time. A pythonic way would be very helpful. Thanks, Mike -- http://mail.python.org/mailman/listinfo/python-list
Re: Manging multiple Python installation
Jeremy Jones wrote:
> Andy Leszczynski wrote:
>> Is there any way to pass the prefix to the "make install"? Why "make"
>> depends on that?
>>
>> A.
>
> What does it matter? If you *could* pass it to make, what does that buy
> you? I'm not a make guru, but I'm not sure you can do this. Someone
> else better versed in make will certainly chime in if I'm wrong. But I
> think make just looks at the Makefile and does what it's going to do.
> If you want different behavior, you edit the Makefile or you get the
> Makefile created differently with configure.

That way you could install to a different directory without having to
rebuild the whole thing. I don't think that use case happens very often,
but I've certainly encountered it (not in relation to Python though).

--
If I have been able to see further, it was only because I stood on the
shoulders of giants.  -- Isaac Newton

Roel Schroeven
--
http://mail.python.org/mailman/listinfo/python-list
Re: pretty windows installer for py scripts
rbt wrote: > Any recommendations on a windows packager/installer that's free? I need > it to allow non-tech users to install some python scripts... you know, > "Click Next"... "Click Next"... "Click Finish"... "You're Done!" and > everything just magically works ;) Innosetup (http://www.jrsoftware.org/isinfo.php) is quite good, and it's free (free as in beer, not free as in speech). -- If I have been able to see further, it was only because I stood on the shoulders of giants. -- Isaac Newton Roel Schroeven -- http://mail.python.org/mailman/listinfo/python-list
Re: reading the last line of a file
Martin Franklin wrote:
> Ok, in that case stick to your shell based solution, although 100
> megabytes does not sound that large to me I guess it is relative
> to the system you are running on :) (I have over a gig of memory here)

since the file is gzipped, you need to read all of it to get the last
line. however, replacing

    last_line = logfile.readlines()[-1]

with a plain

    for last_line in logfile:
        pass

avoids storing the entire file in memory.

(note that seek() actually reads the file in 1024 blocks until it gets
to the right location; reading via a line iterator only seems to be
marginally slower. zcat|tail is a LOT faster. ymmv.)
--
http://mail.python.org/mailman/listinfo/python-list
Re: pretty windows installer for py scripts
rbt wrote:
> Any recommendations on a windows packager/installer that's free? I need
> it to allow non-tech users to install some python scripts... you know,
> "Click Next"... "Click Next"... "Click Finish"... "You're Done!" and
> everything just magically works ;)

Isn't that already available in the distutils module?
--
http://mail.python.org/mailman/listinfo/python-list
Re: Migrate PYD Files
David Duerrenmatt wrote:
> Is there a way to use old pyd files (Python 1.5.2) with a newer version
> of Python without recompiling them?
> Because the source code is not available anymore, I'm wondering whether
> it's possible or not to change few bytes with a hex editor (version
> number?). I'd like to give it a try since the modules don't use too
> critical functions.

on windows, Python PYD files are linked against that version of the
Python interpreter DLL (python15.dll, python22.dll, etc). you could
perhaps try changing any occurrence of "python15" to "python23" and see
how far you get...

(another approach would be to develop a custom "python15.dll" which
implements the necessary operations and maps them to the appropriate
Python interpreter DLL, using LoadLibrary/GetProcAddress for "manual"
linking. use "dumpbin /exports" on your PYD to see what Python APIs it's
using)
--
http://mail.python.org/mailman/listinfo/python-list
Re: pretty windows installer for py scripts
Christophe <[EMAIL PROTECTED]> writes: > > Any recommendations on a windows packager/installer that's free? I need > > it to allow non-tech users to install some python scripts... you know, > > "Click Next"... "Click Next"... "Click Finish"... "You're Done!" and > > everything just magically works ;) > > Isn't that already available in the distutils module ? No not really. There's a thing like that for installing library modules but AFAIK it doesn't make an easy way to install desktop applications with start menu icons and all that kind of stuff. Inno Setup does do those things. -- http://mail.python.org/mailman/listinfo/python-list
Re: pretty windows installer for py scripts
Paul Rubin wrote:
> Christophe <[EMAIL PROTECTED]> writes:
>>> Any recommendations on a windows packager/installer that's free? I
>>> need it to allow non-tech users to install some python scripts... you
>>> know, "Click Next"... "Click Next"... "Click Finish"... "You're
>>> Done!" and everything just magically works ;)
>>
>> Isn't that already available in the distutils module ?
>
> No not really. There's a thing like that for installing library
> modules but AFAIK it doesn't make an easy way to install desktop
> applications with start menu icons and all that kind of stuff. Inno
> Setup does do those things.

I'll take your word on that. If you want to create a fully contained
installer, you should use py2exe; among its samples there's an example of
how to integrate InnoSetup.
--
http://mail.python.org/mailman/listinfo/python-list
Re: determine if os.system() is done
On Wed, 07 Sep 2005 23:28:13 -0400, rumours say that Peter Hansen <[EMAIL PROTECTED]> might have written: >Martin P. Hellwig wrote: >> The only thing I am disappointed at his writing style, most likely he >> has a disrupted view on social acceptable behavior and communication. >> These skills might be still in development, so perhaps it is reasonable >> to give him a chance and wait until he is out of his puberty. >He's 37 years old! How long should one be given to mature? I (lots of female friends, actually :) believe many men remain in puberty for longer than that. -- TZOTZIOY, I speak England very best. "Dear Paul, please stop spamming us." The Corinthians -- http://mail.python.org/mailman/listinfo/python-list
Distutils question
How can I install my .mo files from a distutils script into their default
location?

sys.prefix + os.sep + 'share' + os.sep + 'locale'
--
http://mail.python.org/mailman/listinfo/python-list
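One approach (sketched here with hypothetical file and domain names): build a data_files list whose relative destination paths are resolved against sys.prefix at install time, which lands the catalogs in exactly the share/locale location mentioned above.

```python
import os

# Hypothetical source layout: po/<lang>/app.mo.  Relative destination
# paths in data_files are interpreted relative to sys.prefix when the
# install command runs.
def locale_data_files(po_dir="po", domain="app", langs=("de", "fr")):
    files = []
    for lang in langs:
        src = os.path.join(po_dir, lang, "%s.mo" % domain)
        dest = os.path.join("share", "locale", lang, "LC_MESSAGES")
        files.append((dest, [src]))
    return files

# In setup.py this list would then be passed as:
#   setup(..., data_files=locale_data_files())
```

An absolute destination path would bypass sys.prefix entirely, so keeping it relative is what gives the "default location" behaviour.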
Re: Printer List from CUPS
Mike Tammerman wrote:
> I want to get the printer list from CUPS. I found some ways using
>
> lpstat -p and
> http://localhost:631/printers
>
> but, these ways require some parsing and I am not sure, if the parsing
> works all the time. A pythonic way would be very helpful.

Just for fun I tried this on my Fedora core 4 box

[~]$ python
Python 2.4.1 (#1, May 16 2005, 15:19:29)
[GCC 4.0.0 20050512 (Red Hat 4.0.0-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pycups
>>>

so I guess if you are on a redhat based distro there is hope...
--
http://mail.python.org/mailman/listinfo/python-list
Re: python profiling, hotspot and strange execution time
[EMAIL PROTECTED] wrote:

>I am not sure to understand the big difference between "time spent in
>different areas of code" and "how long did this thing take to run?".
>Looking at python doc for deterministic profiling, I understand the
>implementation difference, and the performance implications, but I
>don't see why deterministic profiling would not give me an overall
>picture ?

I think from below you said you were more clear on this. Cool.

>Well, your example make actually more sense to me :) I understand the
>difference between CPU time and time spent in the python code (even if
>I was not clear in my previous post about it...). But this case does
>not apply to my code, as my code is never "idled", takes 100 % of the
>cpu, with no other CPU consuming task

>>I would attribute the wall clock and profile time difference to the
>>overhead of hotshot. While hotshot is miles better than the "regular"
>>profiler, it can still take a little time to profile code.

>Well, if hotshot reported a timing which is longer than the execution
>time without it, I would have considered that to be normal. Even in C,
>using gprof has a non negligeable overhead, most of the time.

Actually, I'd expect the opposite, but not as extreme for your case. I
would expect it to *report* that a piece of code took less time to
execute than I *observed* it taking. Reasons in the snipped area above.
Unless you're calling a C extension, in which case, IIRC, it's supposed
to report the actual execution time of the C call (and I guess plus any
overhead that hotshot may cause it to incur), in which case you would
be IMO 100% correct. I hope you're not calling a C extension, or my
head's gonna explode.

>What I don't understand is why hotshot reports that do_foo is executed
>in 2 seconds whereas it effectively takes more than 10 seconds ? Is it
>because I don't understand what deterministic profiling is about ?

The profiler is supposed to be smart about how it tracks time spent in
execution so it doesn't get readings that are tainted by other
processes running or other stuff. I could easily see a 2->10 second
disparity if your process were idling somehow. Now, if you're doing a
lot of IO, I wonder if the profiler isn't taking into consideration any
blocking calls that may max out the CPU in IOWAIT... Are you doing a
lot of IO?

>David

JMJ
--
http://mail.python.org/mailman/listinfo/python-list
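The CPU-time versus wall-clock distinction the thread keeps circling can be demonstrated directly. A sketch in modern terms (`time.clock()` from these 2005-era posts was later removed; its two meanings are now split into `time.process_time()` for CPU time and `time.time()` for wall-clock time):

```python
import time

# CPU time consumed by this process vs. wall-clock time elapsed.
cpu_start = time.process_time()
wall_start = time.time()

time.sleep(0.5)  # the process is idle: wall time advances, CPU time barely does

cpu_elapsed = time.process_time() - cpu_start
wall_elapsed = time.time() - wall_start

# Expect roughly: CPU time near 0, wall time near 0.5
print("CPU time: %.3f  wall time: %.3f" % (cpu_elapsed, wall_elapsed))
```

A sleeping (or IO-blocked) process is exactly the case where a profiler reporting CPU time can show far less than the wall clock, which is the 2-versus-10-second disparity discussed above.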
Re: popen in thread on QNX
Jacek Popławski wrote:

>I am still in the process of creating my script which will run command
>received from socket.
>My scripts works perfectly on Linux, but doesn't work on QNX!
>
>File "/usr/lib/python2.4/popen2.py", line 108, in __init__
>    self.pid = os.fork()
>OSError: [Errno 89] Function not implemented
>
>When I try to use os.popen3 - it works. But when I try to use it in new
>thread - I see that error message.
>Do you see any solution?
>This script must work on QNX, command must be on thread, because I need
>to stop it after timeout. I need popen to see stdout and stderr.
>Any ideas?

os.popen already creates a new process. So what if you try to call
os.popen from your main thread, then pass the file descriptors to your
thread?
It is just an idea...

  Les

--
http://mail.python.org/mailman/listinfo/python-list
Re: Migrate PYD Files
David Duerrenmatt a écrit : > Is there a way to use old pyd files (Python 1.5.2) with a newer version > of Python without recompiling them? No. In general, incrementing the middle version number means that the Python C API has changed in an incompatible manner. There are some exceptions (2.2 modules may work in 2.3) but there's no way you can use a 1.5 .pyd with Python >= 2.2. As far as I know. -- http://mail.python.org/mailman/listinfo/python-list
Re: reading the last line of a file
Xah Lee wrote:

> i switched to system call with tail because originally i was using a
> pure Python solution
>
>     inF = gzip.GzipFile(ff, 'rb');
>     s=inF.readlines()
>     inF.close()
>     last_line=s[-1]
>
> and since the log file is 100 megabytes it takes a long time and hogs
> massive memory.

How about:

    inF = gzip.GzipFile(ff, 'rb')
    for line in inF:
        pass
    last_line = line

It's a bit slower than zcat and tail, but the memory usage is fine.
--
http://mail.python.org/mailman/listinfo/python-list
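A variant of the same idea, sketched with the standard library's `collections.deque`: a one-element deque consumes the file iterator but keeps only the most recent line, so memory use stays constant regardless of file size, and an empty file no longer leaves `line` undefined. The tiny demo file is just an illustration standing in for the 100 MB log.

```python
import collections
import gzip
import os
import tempfile

def last_line(path):
    # maxlen=1 means the deque always holds only the newest line.
    with gzip.open(path, "rt") as f:
        d = collections.deque(f, maxlen=1)
    return d[0] if d else None

# Tiny demo file standing in for the big log.
path = os.path.join(tempfile.mkdtemp(), "demo.gz")
with gzip.open(path, "wt") as f:
    f.write("first\nsecond\nlast\n")
print(repr(last_line(path)))  # 'last\n'
```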
Re: popen in thread on QNX
Laszlo Zsolt Nagy wrote: > os.popen already creates a new process. So what if you try to call > os.popen from your main thread, then pass the file descriptors to your > thread? > It is just an idea... But I need to run command from thread, that's the main reason to create new thread :) -- http://mail.python.org/mailman/listinfo/python-list
Re: Manging multiple Python installation
Roel Schroeven wrote:

>Jeremy Jones wrote:
>
>>Andy Leszczynski wrote:
>>
>>>Is there any way to pass the prefix to the "make install"? Why "make"
>>>depends on that?
>>>
>>>A.
>>
>>What does it matter? If you *could* pass it to make, what does that buy
>>you? I'm not a make guru, but I'm not sure you can do this. Someone
>>else better versed in make will certainly chime in if I'm wrong. But I
>>think make just looks at the Makefile and does what it's going to do.
>>If you want different behavior, you edit the Makefile or you get the
>>Makefile created differently with configure.
>
>That way you could install to a different directory without having to
>rebuild the whole thing. I don't think that use case happens very
>often, but I've certainly encountered it (not in relation to Python
>though).

I guess I'm still having a hard time understanding "what does it
matter?". Even if he reconfigures, he's not going to rebuild the whole
thing unless he does a make clean. For example, I just built Python
twice, once with a prefix of /usr/local/apps/pytest1 and then with a
prefix of /usr/local/apps/pytest2 and timed the compile:

BUILD 1:

    [EMAIL PROTECTED] 7:16AM Python-2.4.1 % cat compile_it.sh
    ./configure --prefix=/usr/local/apps/pytest1
    make
    make install

    ./compile_it.sh  107.50s user 9.00s system 78% cpu 2:28.53 total

BUILD 2:

    [EMAIL PROTECTED] 7:18AM Python-2.4.1 % cat compile_it.sh
    ./configure --prefix=/usr/local/apps/pytest2
    make
    make install

    ./compile_it.sh  21.17s user 6.21s system 56% cpu 48.323 total

I *know* a significant portion of the time of BUILD 2 was spent in
configure. So if he's really eager to save a few CPU seconds, he can
edit the Makefile manually and change the prefix section. Maybe I'm
just a slow file editor, but I would do configure again just because
it'd probably be faster for me. Not to mention potentially less error
prone. But he's going to have to build something again. Or not.

He *should* be able to just tar up the whole directory and it should
"just work". I moved /usr/local/apps/pytest1 to /usr/local/apps/pyfoo
and imported xml.dom.minidom and it worked. I'm guessing the python
binary searches relative to itself first (??). But if I move the
python binary to a new location, it doesn't find xml.dom.minidom.

JMJ
--
http://mail.python.org/mailman/listinfo/python-list
Re: reading the last line of a file
Fredrik Lundh wrote:

> zcat|tail is a LOT faster.

and here's the "right way" to use that:

    from subprocess import Popen, PIPE

    p1 = Popen(["zcat", filename], stdout=PIPE)
    p2 = Popen(["tail", "-1"], stdin=p1.stdout, stdout=PIPE)
    last_line = p2.communicate()[0]

(on my small sample, this is roughly 15 times faster than the empty
for loop)
--
http://mail.python.org/mailman/listinfo/python-list
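One subtlety the snippet glosses over: the parent should close its own copy of `p1.stdout` after wiring up `p2`, so that the first process can receive SIGPIPE if the second one exits early. A runnable sketch of the same pipeline pattern, with portable stand-ins (an assumption: `sys.executable -c` one-liners replace the real `zcat` and `tail -1` so the example runs without either tool):

```python
import sys
from subprocess import Popen, PIPE

# Stand-ins for zcat and tail -1, so the sketch runs anywhere.
producer = [sys.executable, "-c", "print('first'); print('last')"]
consumer = [sys.executable, "-c",
            "import sys; print(sys.stdin.read().splitlines()[-1])"]

p1 = Popen(producer, stdout=PIPE)
p2 = Popen(consumer, stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # let p1 get SIGPIPE if p2 exits before reading it all
last_line = p2.communicate()[0].decode().strip()
p1.wait()
print(last_line)  # last
```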
Re: killing thread after timeout
Bryan Olson wrote:

> First, a portable worker-process timeout: In the child process,
> create a worker daemon thread, and let the main thread wait
> until either the worker signals that it is done, or the timeout
> duration expires.

It works on QNX, thanks a lot, your reply was very helpful!

> If we need to run on Windows (and Unix), we can have one main
> process handle the socket connections, and pipe the data to and
> from worker processes. See the popen2 module in the Python
> Standard Library.

popen will not work in thread on QNX/Windows, same problem with spawnl
currently I am using:

    os.system(command+">file 2>file2")

it works, I just need to finish implementing everything and check how
it may fail...
One more time - thanks for great idea!
--
http://mail.python.org/mailman/listinfo/python-list
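The `os.system(command + ">file 2>file2")` workaround can be fleshed out into a small helper. A Unix sketch, not the poster's actual code: the temp-file handling and the `echo hello` demo command are illustrative assumptions, and error handling is minimal.

```python
import os
import tempfile

def run_captured(command):
    """Run `command` through the shell, redirecting stdout and stderr
    into temporary files and reading them back afterwards."""
    out_fd, out_path = tempfile.mkstemp()
    err_fd, err_path = tempfile.mkstemp()
    os.close(out_fd)
    os.close(err_fd)
    status = os.system("%s >%s 2>%s" % (command, out_path, err_path))
    with open(out_path) as f:
        out = f.read()
    with open(err_path) as f:
        err = f.read()
    os.remove(out_path)
    os.remove(err_path)
    return status, out, err

status, out, err = run_captured("echo hello")
print(status, out.strip(), repr(err))  # 0 hello ''
```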
Re: launching adobe reader with arguments from os.system call
Thank you Martin, here's what I discovered this morning to work, the
only problem is it is painfully slow to launch the application.
os.system('start acroRd32.exe'+' /A'+' "page=15"'+' "C:\\Gregtemp\\estelletest\\NexGlosser_User_Guide_W60G00_en.pdf"')
I'm going to give your method a try to see if it launches any quicker.
Thanks again.
Greg Miller
--
http://mail.python.org/mailman/listinfo/python-list
record sound to numarray/list
Hello, do you know any way to record a sound from the soundcard on winXP to a numarray? many thanks Daniel -- http://mail.python.org/mailman/listinfo/python-list
Re: killing thread after timeout
Bryan Olson <[EMAIL PROTECTED]> writes:

> First, a portable worker-process timeout: In the child process,
> create a worker daemon thread, and let the main thread wait
> until either the worker signals that it is done, or the timeout
> duration expires. As the Python Library Reference states in
> section 7.5.6:

Maybe the child process can just use sigalarm instead of a separate
thread, to implement the timeout.

> If Unix-only is acceptable, we can set up the accepting socket,
> and then fork(). The child processes can accept() incomming
> connections on its copy of the socket. Be aware that select() on
> the process-shared socket is tricky, in that that the socket can
> select as readable, but the accept() can block because some
> other processes took the connection.

To get even more OS-specific, AF_UNIX sockets (at least on Linux) have
a feature called ancillary messages that allow passing file
descriptors between processes. It's currently not supported by the
Python socket lib, but one of these days... . But I don't think
Windows has anything like it. No idea about QNX.
--
http://mail.python.org/mailman/listinfo/python-list
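The sigalarm idea can be sketched with `signal.alarm`. A Unix-only sketch with an illustrative 1-second timeout and a `time.sleep(5)` standing in for the long-running work; note it only works in the main thread of the process, the very restriction reported later in this thread.

```python
import signal
import time

class Timeout(Exception):
    pass

def _on_alarm(signum, frame):
    raise Timeout()

# Only valid in the main thread, and only on Unix.
signal.signal(signal.SIGALRM, _on_alarm)
signal.alarm(1)          # deliver SIGALRM in 1 second
start = time.time()
try:
    time.sleep(5)        # stand-in for the long-running work
    timed_out = False
except Timeout:
    timed_out = True
finally:
    signal.alarm(0)      # cancel any pending alarm

print(timed_out, round(time.time() - start))  # True 1
```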
Re: popen in thread on QNX
>>os.popen already creates a new process. So what if you try to call
>>os.popen from your main thread, then pass the file descriptors to your
>>thread?
>>It is just an idea...
>
>But I need to run command from thread, that's the main reason to create
>new thread :)

Ok, but can't your main thread be a server thread with a queue?
Workflow example:

- one of your worker threads wants to run a command
- it creates the argument list and puts it into a message queue
- worker thread starts to sleep
- main thread processes the message queue -> it will run popen, put
  back the file descriptors into the message and wake up the worker
  thread
- the worker thread starts to work with the files

Or, if you create the new thread just to interact with that new
process, why don't you call popen before you start the thread? Well, of
course this would increase the time needed to start up a new worker.

  Les

--
http://mail.python.org/mailman/listinfo/python-list
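The workflow above can be sketched with a request queue served by the main thread. Assumptions: `queue`/`subprocess` stand in for the era's `Queue`/`popen2`, the demo command is an arbitrary `sys.executable -c` one-liner, and only a single request is served to keep the sketch short.

```python
import queue
import subprocess
import sys
import threading

requests = queue.Queue()

def worker(argv, results):
    """Worker thread: ask the main thread to spawn the command, then
    block on a private reply queue until the output arrives."""
    reply = queue.Queue(maxsize=1)
    requests.put((argv, reply))
    results.append(reply.get())   # "sleeps" here until answered

# Any argv works; sys.executable keeps the demo portable.
argv = [sys.executable, "-c", "print('hi')"]
results = []
t = threading.Thread(target=worker, args=(argv, results))
t.start()

# Main thread: serve one request. Only the main thread ever creates a
# child process -- the point of the workaround for fork() failing
# inside threads on QNX.
argv, reply = requests.get()
reply.put(subprocess.check_output(argv))
t.join()
print(results)  # [b'hi\n']
```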
Re: Is my thread safe from premature garbage collection?
Good point there. Sorry, I should have thought of that myself really, but well, I guess I'm having my senior moments a bit early. :P -- http://mail.python.org/mailman/listinfo/python-list
Re: Creating custom event in WxPython
Cool, thanks a bunch! -- http://mail.python.org/mailman/listinfo/python-list
Re: PEP-able? Expressional conditions
On 2005-09-07, Terry Reedy <[EMAIL PROTECTED]> wrote:
>
> "Kay Schluehr" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
>> No, as I explained it is not a ternary operator and it can't easily be
>> implemented using a Python function efficiently because Python does
>> not support lazy evaluation.
>
> By *carefully* using the flow-control operators 'and' and 'or', you can
> often get what you want *now*, no PEP required.

Which is why I don't understand the resistance against introducing
such a beast. Whether it is a ternary operator or something more
general, the proposed constructions are usually more readable than the
python construction with the same functionality.

The decorator syntax IMO provided much less improvement in readability
for functionality that was already provided, and that got implemented.
A ternary operator (or suitable generalisation) would IMO provide a
greater improvement where readability is concerned, but is resisted
all the way.

--
Antoon Pardon
--
http://mail.python.org/mailman/listinfo/python-list
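The and/or trick Terry alludes to, together with the trap that makes the "*carefully*" necessary, can be shown in a few lines (the `A if cond else B` conditional-expression syntax only arrived later, in Python 2.5):

```python
# "cond and A or B" mimics a conditional expression -- but only when A
# is truthy.
x = 5
sign = (x >= 0) and "non-negative" or "negative"
print(sign)  # non-negative

# The trap: if A is falsy, B wins even when cond is true.
broken = True and "" or "default"
print(broken)  # default -- a real conditional would yield ""

# Classic workaround: wrap both arms in one-element lists (always
# truthy), then index the result.
fixed = (True and [""] or ["default"])[0]
print(repr(fixed))  # ''
```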
Re: Distutils question
Laszlo Zsolt Nagy wrote:

> How how can I install my .mo files from a distutil script into its
> default location?
>
> sys.prefix + os.sep + 'share' + os.sep + 'locale'

I can't answer the first question, but the latter should be written
this way instead

    os.path.join(sys.prefix, 'share', 'locale')

for greater portability and maintainability.

-Peter
--
http://mail.python.org/mailman/listinfo/python-list
Re: Creating custom event in WxPython
You should be deriving from PyCommandEvent since only CommandEvents are
set to propagate, which is probably what you want (see the wxWidgets
Event handling overview). In order to use Bind you will need an
instance of PyEventBinder. For the example below that would be
something like:

    EVT_INVOKE = PyEventBinder(wxEVT_INVOKE)

-Chris

On Thu, Sep 01, 2005 at 04:25:05PM +0200, fraca7 wrote:
> [EMAIL PROTECTED] wrote:
>
> > Now when my socket thread detects an incoming message, I need my main
> > thread to interpret the message and react to it by updating the GUI.
> > IMO the best way to achieve this is by having my socket thread send a
> > custom event to my application's event loop for the main thread to
> > process. However, I'm at a total loss as far as creating custom events
> > is concerned. The WxWindows documentation isn't very helpful on this
> > either.
>
> I think this is what you're looking for:
>
> # begin code
>
> import wx
>
> wxEVT_INVOKE = wx.NewEventType()
>
> class InvokeEvent(wx.PyEvent):
>     def __init__(self, func, args, kwargs):
>         wx.PyEvent.__init__(self)
>         self.SetEventType(wxEVT_INVOKE)
>         self.__func = func
>         self.__args = args
>         self.__kwargs = kwargs
>
>     def invoke(self):
>         self.__func(*self.__args, **self.__kwargs)
>
> class MyFrame(wx.Frame):
>     def __init__(self, *args, **kwargs):
>         wx.Frame.__init__(self, *args, **kwargs)
>         self.Connect(-1, -1, wxEVT_INVOKE, self.onInvoke)
>
>     def onInvoke(self, evt):
>         evt.invoke()
>
>     def invokeLater(self, func, *args, **kwargs):
>         self.GetEventHandler().AddPendingEvent(InvokeEvent(func, args, kwargs))
>
> # end code
>
> This way, if frm is an instance of MyFrame, invoking
> frm.invokeLater(somecallable, arguments...) will invoke 'somecallable'
> with the specified arguments in the main GUI thread.
>
> I found this idiom somewhere on the Web and can't remember where.
>
> HTH
> --
> http://mail.python.org/mailman/listinfo/python-list
--
http://mail.python.org/mailman/listinfo/python-list
Re: Construct raw strings?
Peter Hansen wrote: > Benji York wrote: > >> It's not join that's getting you, it's the non-raw string >> representation in path_to_scan. Use either 'd:\test_images' or >> 'd:\\test_images' instead. > > Benji, you're confusing things: you probably meant r'd:\test_images' > in the above Doh! I did indeed. Thanks for the backup. -- Benji York -- http://mail.python.org/mailman/listinfo/python-list
Re: PEP-able? Expressional conditions
Antoon Pardon wrote:
> Which is why I don't understand the resistance against introducing
> such a beast.
The idea has already been discussed to death. Read PEP 308 to see what was
proposed, discussed, and why the PEP was eventually rejected:
http://www.python.org/peps/pep-0308.html:
> Status: Rejected
> ...
> Requests for an if-then-else ("ternary") expression keep coming up
> on comp.lang.python. This PEP contains a concrete proposal of a
> fairly Pythonic syntax. This is the community's one chance: if
> this PEP is approved with a clear majority, it will be implemented
> in Python 2.4. If not, the PEP will be augmented with a summary
> of the reasons for rejection and the subject better not come up
> again. While the BDFL is co-author of this PEP, he is neither in
> favor nor against this proposal; it is up to the community to
> decide. If the community can't decide, the BDFL will reject the
> PEP.
> ...
> Following the discussion, a vote was held. While there was an
> overall
> interest in having some form of if-then-else expressions, no one
> format was able to draw majority support. Accordingly, the PEP was
> rejected due to the lack of an overwhelming majority for change.
> Also, a Python design principle has been to prefer the status quo
> whenever there are doubts about which path to take.
--
http://mail.python.org/mailman/listinfo/python-list
Re: Migrate PYD Files
David Duerrenmatt wrote:
> Is there a way to use old pyd files (Python 1.5.2) with a newer version
> of Python without recompiling them?
>
> Because the source code is not available anymore, I'm wondering whether
> it's possible or not to change few bytes with a hex editor (version
> number?). I'd like to give it a try since the modules don't use too
> critical functions.
>
> Thanks
> dd
Hi David,
you might checkout embedding your Python 1.5.2 interpreter into Python
using ctypes. Just follow chap.5 of the "Extending and Embedding"
tutorial and the docs of the ctypes API.
The following little script runs successfully on Python23 interpreter.
==
import ctypes
python24 = ctypes.cdll.LoadLibrary("python24.dll")
python24.Py_Initialize()
python24.PyRun_SimpleString("from time import time,ctime\n")
python24.PyRun_SimpleString("print 'Today is',ctime(time())\n");
python24.Py_Finalize()
Kay
--
http://mail.python.org/mailman/listinfo/python-list
Re: pretty windows installer for py scripts
The elegant way to do installs on Windows would be by creating an MSI. Microsoft provides (IIRC) a simple tool to create those (Orca), that's free. Without the Installshield or Wise tools I think it would take quite some effort though, because you need to understand all the details of how msi tables work. You might be able to get the idea by using orca to look at other msi installs, and figure out how they do things. MSI itself isn't very difficult, but it'll take some time to understand it. If you do use it, do not foget to define uninstall actions. The izfree tools and tutorials on sourceforge might do the trick for you: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/msi/setup/windows_installer_development_tools.asp http://izfree.sourceforge.net/ -- http://mail.python.org/mailman/listinfo/python-list
ANN: PyDev 0.9.8.1 released
Hi All,

PyDev - Python IDE (Python Development Enviroment for Eclipse) version
0.9.8.1 has been released.

Check the homepage (http://pydev.sourceforge.net/) for more details.

Details for Release: 0.9.8.1

Major highlights:

* Java 1.4 support reintroduced.
* Styles added for syntax highlighting (bold and italic), contributed
  by Gerhard Kalab.

Others that are new and noteworthy:

* zombie process after exiting eclipse should not happen anymore
* paths with '.' are accepted for the pythonpath (unless they start
  with a '.', because it may not accept relative paths).
* relative imports are added to code-completion
* local imports are taken into consideration when doing code completion
* debugger has 'change support', so, changed variables in a scope
  appear red

Cheers,

Fabio

--
Fabio Zadrozny
--
Software Developer

ESSS - Engineering Simulation and Scientific Software
www.esss.com.br

PyDev - Python Development Enviroment for Eclipse
pydev.sf.net
pydev.blogspot.com
--
http://mail.python.org/mailman/listinfo/python-list
Video display, frame rate 640x480 @ 30fps achievable?
Hi, I need to develop an application that displays video 640x480 16-bit per pixel with 30 fps. I would prefer to do that with Python (wxPython) but don't have any experience whether it is possible to achieve that frame rate and still have some resources for other processing left? My development PC would be a Celeron 1 GHz. The final system could be a faster system. I would appreciate if anybody could share some experience in that field. Thanks for any help. Guenter -- http://mail.python.org/mailman/listinfo/python-list
Re: Printer List from CUPS
I am using Ubuntu. pycups does not seem to exist any more.

Mike

--
http://mail.python.org/mailman/listinfo/python-list
Re: killing thread after timeout
Paul Rubin wrote:

> Maybe the child process can just use sigalarm instead of a separate
> thread, to implement the timeout.

Already tried that, signals work only in the main thread.

> To get even more OS-specific, AF_UNIX sockets (at least on Linux) have
> a feature called ancillary messages that allow passing file
> descriptors between processes. It's currently not supported by the
> Python socket lib, but one of these days... . But I don't think
> Windows has anything like it. No idea about QNX.

I have solved the problem with an additional process, just like Bryan
Olson proposed. Looks like all features I wanted are working... :)
--
http://mail.python.org/mailman/listinfo/python-list
Re: Video display, frame rate 640x480 @ 30fps achievable?
Guenter wrote: > I need to develop an application that displays video 640x480 16-bit per > pixel with 30 fps. > > I would prefer to do that with Python (wxPython) but don't have any > experience whether it is possible to achieve that frame rate and still > have some resources for other processing left? My development PC would > be a Celeron 1 GHz. The final system could be a faster system. At the very least, you should be looking at Pygame instead, as wxPython is not really intended for that kind of thing. Whether or not you can manage the desired frame rate depends entirely on what you will be displaying... a single pixel moving around, full-screen video, or something in between? ;-) See for example http://mail.python.org/pipermail/python-list/2002-May/106546.html for one first-hand report on frame rates possible with Pygame (whether it's accurate or not I don't know). -Peter -- http://mail.python.org/mailman/listinfo/python-list
Re: Printer List from CUPS
Mike Tammerman wrote: > I am using Ubuntu. pycups seems to be not existed any more. > > Mike > Yeah as I said if you're using a redhat based distro... However you could try getting the redhat / fedora rpm that provides pycups and installing it? I would ask on the Ubuntu list, I know they are a very python friendly bunch :) Martin -- http://mail.python.org/mailman/listinfo/python-list
Re: PEP-able? Expressional conditions
On 2005-09-08, Duncan Booth <[EMAIL PROTECTED]> wrote:
> Antoon Pardon wrote:
>
>> Which is why I don't understand the resistance against introducing
>> such a beast.
>
> The idea has already been discussed to death. Read PEP 308 to see what was
> proposed, discussed, and why the PEP was eventually rejected:
So what? If the same procedure would have been followed concerning
the decorator syntax, it would have been rejected too for the same
reasons.
> http://www.python.org/peps/pep-0308.html:
>> Status: Rejected
>> ...
>> Requests for an if-then-else ("ternary") expression keep coming up
>> on comp.lang.python. This PEP contains a concrete proposal of a
>> fairly Pythonic syntax. This is the community's one chance: if
>> this PEP is approved with a clear majority, it will be implemented
>> in Python 2.4. If not, the PEP will be augmented with a summary
>> of the reasons for rejection and the subject better not come up
>> again. While the BDFL is co-author of this PEP, he is neither in
>> favor nor against this proposal; it is up to the community to
>> decide. If the community can't decide, the BDFL will reject the
>> PEP.
>> ...
>> Following the discussion, a vote was held. While there was an
>> overall
>> interest in having some form of if-then-else expressions, no one
>> format was able to draw majority support. Accordingly, the PEP was
>> rejected due to the lack of an overwhelming majority for change.
>> Also, a Python design principle has been to prefer the status quo
>> whenever there are doubts about which path to take.
IMO this is worded in a misleading way. It wasn't that there was lack
of an overwhelming majority for change. 436 supported at least one
of the options and only 82 rejected all options. So it seems 436 voters
out of 518 supported the introduction of a ternary operator.
Yes no format was able to draw majority support but that is hardly
suprising since there where 17 formats to choose from. I find
the fact that people were unable to choose one clear winner out
of 17 formats in one vote, being worded as: "lack of an overwhelming
majority for change" at the least not very accurate and probably
misleading or dishonest.
--
Antoon Pardon
--
http://mail.python.org/mailman/listinfo/python-list
Re: launching adobe reader with arguments from os.system call
Greg Miller wrote:
> Thank you Martin, here's what I discovered this morning to work, the
> only problem is it is painfully slow to launch the application.
>
> os.system('start acroRd32.exe'+' /A'+' "page=15"'+'
> "C:\\Gregtemp\\estelletest\\NexGlosser_User_Guide_W60G00_en.pdf"')
>
> I'm going to give your method a try to see if it launches any quicker.
> Thanks again.
>
> Greg Miller
>
You might notice that if you already have an Acrobat Reader window open
the document comes up rather more quickly.
If you want fast document startup you could consider using the win32all
extensions to create an AcroReader application process in advance of
opening any documents.
regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/
--
http://mail.python.org/mailman/listinfo/python-list
Re: popen in thread on QNX
Laszlo Zsolt Nagy wrote:
> - one of your worker threads wants to run a command
> - it creates the argument list and puts it into a message queue
> - woker thread starts to sleep
> - main thread processes the message queue -> it will run popen, put back
> the file descriptors into the message and wake up the worker thread
> - the worker thread starts to work with the files
Just like I wrote in previous thread ("killing thread after timeout") -
I am writting application which read command from socket and run it. I
need to be sure, that this command will not take too much time, so I
need some kind of timeout, and I need to see its stdout and stderr.
So I run command on separate thread, this thread calls
os.system(command), main thread takes care about timeout.
popen or fork/execve would be much better than os.system, but they both
fail on QNX, because os.fork() is not implemented when called from a
thread
--
http://mail.python.org/mailman/listinfo/python-list
ANN: ConfigObj 4.0.0 Beta 4
Hello Pythoneers,

ConfigObj 4.0.0 has a beta 4 release. This fixes a couple of moderately
serious bugs - so it's worth switching to.

http://cheeseshop.python.org/pypi/ConfigObj
http://www.voidspace.org.uk/python/configobj.html
http://www.voidspace.org.uk/cgi-bin/voidspace/downman.py?file=configobj-4.0.0b4.zip
http://sf.net/projects/configobj

What's New ?
===

(Since Beta 2, the last announced version)

2005/09/07 -- Fixed bug in initialising ConfigObj from a ConfigObj.
Changed the mailing list address.

2005/09/03 -- Fixed bug in ``Section.__delitem__`` oops.

2005/08/28 -- Interpolation is switched off before writing out files.
Fixed bug in handling ``StringIO`` instances. (Thanks to report from
"Gustavo Niemeyer" <[EMAIL PROTECTED]>)
Moved the doctests from the ``__init__`` method to a separate function.
(For the sake of IDE calltips).

Beta 3

2005/08/26 -- String values unchanged by validation *aren't* reset.
This preserves interpolation in string values.

What is ConfigObj ?
===

ConfigObj is a simple but powerful config file reader and writer: an
ini file round tripper. Its main feature is that it is very easy to
use, with a straightforward programmer's interface and a simple syntax
for config files. It has lots of other features though :

* Nested sections (subsections), to any level
* List values
* Multiple line values
* String interpolation (substitution)
* Integrated with a powerful validation system
  o including automatic type checking/conversion
  o allowing default values
  o repeated sections
* All comments in the file are preserved
* The order of keys/sections is preserved
* No external dependencies

ConfigObj is available under the very liberal BSD License. It addresses
*most* of the limitations of ConfigParser as discussed at :
http://wiki.python.org/moin/ConfigParserShootout

Anything Else ?
===

ConfigObj stores nested sections which map names to members
(effectively dictionaries) with values as lists *or* single items.
In association with the validate module it can transparently translate
values back into booleans, floats, integers, and of course strings.
This makes it ideal for *certain* types of human readable (and
writable) data persistence. There is a discussion of this (with
accompanying code) at :
http://www.voidspace.org.uk/python/articles/configobj_for_data_persistence.shtml
--
http://mail.python.org/mailman/listinfo/python-list
Re: Video display, frame rate 640x480 @ 30fps achievable?
Peter Hansen wrote:

>> I need to develop an application that displays video 640x480 16-bit
>> per pixel with 30 fps.
>>
>> I would prefer to do that with Python (wxPython) but don't have any
>> experience whether it is possible to achieve that frame rate and still
>> have some resources for other processing left? My development PC would
>> be a Celeron 1 GHz. The final system could be a faster system.
>
> At the very least, you should be looking at Pygame instead, as wxPython
> is not really intended for that kind of thing. Whether or not you can
> manage the desired frame rate depends entirely on what you will be
> displaying... a single pixel moving around, full-screen video, or
> something in between? ;-)

no contemporary hardware should have any problem reaching that
framerate at that resolution, even if you stick to standard "blit"
interfaces.

getting the data into the "blittable" object fast enough may be more
of a problem, though. I don't know how good wxPython is in that
respect; Tkinter's PhotoImage is probably not fast enough for video,
but a more lightweight object like PIL's ImageWin.Dib works just fine
(I just wrote a test script that reached ~200 FPS at 1400x900, but my
machine is indeed a bit faster than a 1 GHz Celeron).
--
http://mail.python.org/mailman/listinfo/python-list
Re: python profiling, hotspot and strange execution time
I was unhappy with both hotshot and the standard python profiler, so
I wrote my own, which may be what you are looking for. I've submitted
it as a patch at:
http://sourceforge.net/tracker/index.php?func=detail&aid=1212837&group_id=5470&atid=305470
It should add a minimum of overhead, give real numbers and also
gives stats on child calls. However, it is not compatible with
the stats module.
You can compile it as a standalone module.
[EMAIL PROTECTED] wrote:
> Hi there,
>
>I have some scientific application written in python. There is a
> good deal of list processing, but also some "simple" computation such
> as basic linear algebra involved. I would like to speed things up
> implementing some of the functions in C. So I need profiling.
>
>I first tried to use the default python profiler, but profiling my
> application multiplies the execution time by a factor between 10 and
> 100 ! So I decided to give a try to hotspot. I just followed the
> example of the python library reference, but I have some strange
> results concerning cpu time. My profiling script is something like the
> following:
>
> def run_foo():
>     print time.clock()
>
>     function_to_profile()
>
>     print time.clock()
>
> prof = hotshot.Profile("essai.prof")
> benchtime= prof.runcall(run_foo)
> prof.close()
> stats = hotshot.stats.load("essai.prof")
> stats.strip_dirs()
> stats.sort_stats('time', 'calls')
> stats.print_stats(20)
>
> The goal is to profile the function function_to_profile(). Running this
> script gives me a CPU executime time of around 2 seconds, whereas the
> difference between the two clock calls is around 10 seconds ! And I
> don't run any other cpu consuming tasks at the same time, so this
> cannot come from other running processes. Is there something perticular
> about hotspot timing I should know ? I am not sure how I can get more
> accurate results with hotspot.
>
> I would appreciate any help,
>
> Thanks
--
http://mail.python.org/mailman/listinfo/python-list
Distutils extension proposal (was: Re: Distutils question)
Peter Hansen wrote:

>>How how can I install my .mo files from a distutil script into its
>>default location?
>>
>>sys.prefix + os.sep + 'share' + os.sep + 'locale'
>
>I can't answer the first question, but the latter should be written
>this way instead
>
>os.path.join(sys.prefix, 'share', 'locale')
>
>for greater portability and maintainability.

Of course. :-)

I know that Peter is a big Python guru, and even he could not answer
the question. I also read the archives in the i18n-sig. There were
questions about the same problem and the answer was that there is no
standard way to include message files with a distribution.

I would like to propose an extension to distutils. Most packages
contain messages and they should be i18n-ed. The proposal itself
contains two parts.

1. We should extend the distutils interface to allow message files to
be installed to the default location

    os.path.join(sys.prefix, 'share', 'locale')

2. Domain names for packages should be somehow standardized, especially
in conjunction with PEP 301 (Package Index and Metadata for Distutils).
Somehow, the package name and version should identify the message files
that can be used.

  Les

--
http://mail.python.org/mailman/listinfo/python-list
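Until such an extension exists, the closest workaround is listing the compiled catalogs in a setup script's `data_files`, whose relative target paths distutils installs under `sys.prefix`. A hedged sketch: the domain name "mypkg", the source directory layout, and the language list are all invented for illustration.

```python
import os

def mo_data_files(domain, locale_dir, languages):
    """Build a distutils-style data_files list mapping each compiled
    .mo catalog into the conventional share/locale tree (which distutils
    resolves relative to sys.prefix).

    `domain`, `locale_dir` and `languages` are illustrative assumptions:
    distutils defines no standard for message files, which is the gap
    this proposal wants to close.
    """
    data_files = []
    for lang in languages:
        target = os.path.join("share", "locale", lang, "LC_MESSAGES")
        source = os.path.join(locale_dir, lang, "%s.mo" % domain)
        data_files.append((target, [source]))
    return data_files

# Would be passed as data_files=... to distutils' setup().
print(mo_data_files("mypkg", "locale", ["de", "hu"]))
```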
job scheduling framework?
Has anyone seen a simple open source job-scheduling framework written in Python? I don't really want to reinvent the wheel. All I need is the ability to set up a series of atomic "jobs" as a "stream", then have the system execute the jobs in the stream one-at-a-time until all the jobs in the stream are complete or one of them reports an error. (If it tied into a really simple grid-style computing farm, that would be worth double points!) -Chris -- http://mail.python.org/mailman/listinfo/python-list
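
For flavor, the run-until-error behaviour described above is only a few
lines of Python; this is a toy sketch with invented names, not any
existing framework's API:

```python
def run_stream(jobs):
    """Run callable "jobs" one at a time, stopping at the first error.

    Returns (number_completed, error_or_None). A toy sketch of the
    job-stream idea, not an existing framework's interface.
    """
    done = 0
    for job in jobs:
        try:
            job()
        except Exception as exc:
            return done, exc
        done += 1
    return done, None

# usage: third job fails, so the fourth never runs
log = []
stream = [lambda: log.append("extract"),
          lambda: log.append("transform"),
          lambda: 1 / 0,                 # this job raises...
          lambda: log.append("load")]    # ...so this one is skipped
completed, error = run_stream(stream)
```

The grid-computing half of the wish is, of course, where a real
framework earns its keep.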
Re: record sound to numarray/list
No there is not. Hey, you could write one though. -- http://mail.python.org/mailman/listinfo/python-list
Re: Video display, frame rate 640x480 @ 30fps achievable?
Guenter wrote: > Hi, > > I need to develop an application that displays video 640x480 16-bit per > pixel with 30 fps. > > I would prefer to do that with Python (wxPython) but don't have any > experience whether it is possible to achieve that frame rate and still > have some resources for other processing left? My development PC would > be a Celeron 1 GHz. The final system could be a faster system. > > I would appreciate if anybody could share some experience in that > field. No first hand experience - but I guess pymedia is what you need. how to combine that with wx? No idea. Diez -- http://mail.python.org/mailman/listinfo/python-list
RE: CGI File Uploads and Progress Bars
Doug Helm wrote:
> I'm writing a CGI to handle very large file uploads.
> I would like to include a progress bar.
> ...I need to know not only the number of
> bytes received, but also the total number of
> incoming bytes. Here's the heart of the code:
>
> while afcommon.True:
>     lstrData = lobjIncomingFile.file.read(afcommon.OneMeg)
>     if not lstrData:
>         break
>     lobjFile.write(lstrData)
>     llngBytes += long(len(lstrData))
> lobjFile.close()
>
> Assume that lobjIncomingFile is actually a file-type
> element coming from CGI.FieldStorage. It's already
> been tested to ensure that it is a file-type element.
> Also, assume that I've already opened a file on the
> server, referred to by lobjFile (so lobjFile is the
> target of the incoming data).
I took a cursory look through the cgi module (and am trying to remember
what we did for CherryPy*). It seems that, at the time you run the above
code, the uploaded file has already been completely read from the client
and placed into a temporary file. That is, lobjIncomingFile.file.read
does not read from the HTTP request body; it reads from a temporary file
instead.
> If this were a client application opening a file,
> I would just do the following:
>
> import os
> print os.stat('myfile.dat')[6]
>
> But, of course, this isn't a local file. In fact,
> it's not really a file at all.
In fact, it is a file, just a temporary one. See
cgi.FieldStorage.makefile().
> So, bottom line: Does anyone know how to get the
> size of the incoming file data without reading the
> whole thing into a string? Can I do something with
> content_header?
Sure. Subclass cgi.FieldStorage, and override make_file to provide your
own file-like object that you can monitor as its "write" method is
called (see read_binary for the actual upload r/w code). The existing
FieldStorage class places the file size (gleaned from the Content-Length
request header) into self.length.
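
Concretely, the monitorable object can be a small wrapper that counts
bytes as they are written. This is a hypothetical sketch (the names are
mine): an overridden make_file() would return one of these, seeded with
the Content-Length value so the callback can compute percent complete:

```python
import tempfile

class ProgressFile:
    """File-like object that reports bytes written so far.

    Hypothetical sketch: a cgi.FieldStorage subclass could return one
    of these from its make_file() override, passing the Content-Length
    value as total so a progress bar can show percent complete.
    """
    def __init__(self, total, callback=None):
        self._file = tempfile.TemporaryFile()
        self.total = total          # expected size, from Content-Length
        self.written = 0
        self.callback = callback    # called as callback(written, total)

    def write(self, data):
        self._file.write(data)
        self.written += len(data)
        if self.callback:
            self.callback(self.written, self.total)

    def __getattr__(self, name):
        # delegate seek/read/close/etc. to the underlying temp file
        return getattr(self._file, name)

# usage: two 5-byte writes against an expected total of 10
progress = []
f = ProgressFile(10, callback=lambda done, total: progress.append(done))
f.write(b"hello")
f.write(b"world")
f.close()
```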
Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]
* See CherryPy's
--
http://mail.python.org/mailman/listinfo/python-list
Re: job scheduling framework?
Chris Curvey wrote: > Has anyone seen a simple open source job-scheduling framework written > in Python? It sounds like BuildBot (http://buildbot.sf.net) might interest you. It's not exactly meant to be a job control system, but it does have some nice functionality. It might be interesting to extend it in the direction you're talking about. -- Benji York -- http://mail.python.org/mailman/listinfo/python-list
Re: killing thread after timeout
Paul Rubin wrote: > To get even more OS-specific, AF_UNIX sockets (at least on Linux) have > a feature called ancillary messages that allow passing file > descriptors between processes. It's currently not supported by the > Python socket lib, but one of these days... . But I don't think > Windows has anything like it. It can be done on on Windows. http://tangentsoft.net/wskfaq/articles/passing-sockets.html -- --Bryan -- http://mail.python.org/mailman/listinfo/python-list
Re: pretty windows installer for py scripts
I second that. NSIS works better than MSI, Inno, or even InstallShield. I highly recommend it. Of course, a second choice is Inno, third is MSI, and last resort is InstallShield. Another option is to make an installer using "AutoIT" but that can get kind of tricky. -- http://mail.python.org/mailman/listinfo/python-list
Re: question from beginner
Thanks Dennis. In effect stringZVEI doesn't remain empty after the
.readbyte call, so the loop is executed only once.
How could I write a 'while' loop that waits for a non-empty string from
the serial port?
Dario.
Dennis Lee Bieber ha scritto:
> On 7 Sep 2005 07:14:37 -0700, "dario" <[EMAIL PROTECTED]> declaimed the
> following in comp.lang.python:
>
> > Hi, Im new on phyton programming.
> > On my GPRS modem with embedded Phyton 1.5.2+ version, I have to receive
> > a string from serial port and after send this one enclosed in an
> > e-mail.
> > All OK if the string is directly generated in the code. But it doesn't
> > works if I wait for this inside a 'while' loop. This is the simple
> > code:
> >
> First -- post the real code file would help -- the indentation of
> the first two statements below is wrong.
>
> > global stringZVEI
> >
> This does nothing at the top level -- if only makes sense INSIDE a
> "def" block, where it has the effect of saying "this variable is not
> local to the function"
>
> > while stringZVEI=='':
> > MOD.sleep(10)
>
> Is there something wrong with
>
> import time
> time.sleep()
>
>
> > a=SER.send(' sono nel while stringZVEI==st vuota')
> > stringZVEI = SER.readbyte()
>
> #for debug
> print "%2X " % stringZVEI
>
> > a=SER.send(' stringZVEI=')
> > a=SER.send(stringZVEI)
> >
> > MOD and SER are embedded class maked by third part.
> >
> > From my very little debug possibility it seem that loop is executed 1
> > time only nevertheless stringZVEI is still empty. The line
> > a=SER.send(' stringZVEI=')
> > work correctly but
> >
> > a=SER.send(stringZVEI)
> >
> What does .readbyte() do if there is no data to be read? Since your
> loop is based on a totally empty string, if .readbyte returns /anything/
> (even a "null" byte -- 0x00) your loop will exit; and a null byte may
> not be visible on the send...
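>
> A polling loop along these lines might look like the sketch below.
> It is only a guess at the API: I'm assuming SER.readbyte() returns an
> empty string when nothing has arrived, and all the names are made up.
>
> ```python
> import time
>
> def wait_for_data(read_func, poll_interval=0.1, timeout=10.0):
>     """Poll read_func until it returns non-empty data or we time out.
>
>     read_func is assumed to return '' (or None) when nothing is
>     available, which is what SER.readbyte() appears to do here.
>     """
>     deadline = time.time() + timeout
>     while time.time() < deadline:
>         data = read_func()
>         if data:
>             return data
>         time.sleep(poll_interval)
>     return None    # nothing arrived in time
>
> # usage with a fake port that has data ready on the third poll
> responses = iter(['', '', 'Z'])
> result = wait_for_data(lambda: next(responses), poll_interval=0.01)
> ```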
>
> --
--
http://mail.python.org/mailman/listinfo/python-list
Re: List of integers & L.I.S.
> ... and let me reveal the secret: > http://spoj.sphere.pl/problems /SUPPER/ your question is different than the question on this website. also, what do you consider to be the correct output for this permutation? (according to your original question) [4, 5, 1, 2, 3, 6, 7, 8] Manuel -- http://mail.python.org/mailman/listinfo/python-list
Re: List of integers & L.I.S.
So, this has no real world use, aside from posting it on a website. Thanks for wasting our time. You are making up an arbitrary problem and asking for a solution, simply because you want to look at the solutions, not because your problem needs to be solved. Clearly, this is a waste of time. -- http://mail.python.org/mailman/listinfo/python-list
Re: popen in thread on QNX
spawn() works on QNX, fork() does not. -- http://mail.python.org/mailman/listinfo/python-list
Re: Need help with C extension module
Ok, I found further examples on the Internet and got something working
(it seems), but I have a question about the memory management. The
example I found did not include any of the PyMem_... functions. Here's
roughly what I have working:

cdef extern from "my.h":
    cdef struct inputs:
        char *x
    cdef struct outputs:
        int y
    outputs *func(inputs *x, outputs *y)
    int init(char* fname)

class Analyzer:
    def __init__(self, fname):
        init(fname)

    # inp is my python "Inputs" object.
    def myfunc(self, inp):
        cdef inputs* i
        i.x = inp.x
        cdef outputs* o
        o = func(i, o)
        return o.y

class Inputs:
    def __init__(self):
        self.x = ""

So there is no explicit memory management going on there as in Robert's
example. Is this ok?

Thanks,
Chris
--
http://mail.python.org/mailman/listinfo/python-list
Help! Python either hangs or core dumps when calling C malloc
Hi Everyone! I've been trying to figure out this weird bug in my program. I have a python program that calls a C function that reads in a binary file into a buffer. In the C program, buffer is allocated by calling malloc. The C program runs perfectly fine but when I use python to call the C function, it core dumps at malloc. I've tried multiple binary files with different sizes and the result is: if file size is < 20 bytes , works fine if file size is > 20 bytes, it hangs or core dumps. Please help!! Lil -- http://mail.python.org/mailman/listinfo/python-list
Re: launching adobe reader with arguments from os.system call
Thank you for the information, when I launched the Reader on the actual hardware it launched quickly. I think I just have too much running on my application PC. I will consider starting an AcroReader app however. Greg -- http://mail.python.org/mailman/listinfo/python-list
Re: Cleaning strings with Regular Expressions
sheffdog wrote:
> Using regular expressions, the best I can do so far is using the re.sub
> command but it still takes two lines. Can I do this in one line? Or
> should I be approaching this differently? All I want to end up with is
> the file name "ppbhat.tga".

A regular expression to do what you want:

>>> import re
>>> s = 'setAttr ".ftn" -type "string" "/assets/chars/boya/geo/textures/lod1/ppbhat.tga";'
>>> s = re.sub(r".*/(.*\.tga).*", r"\1", s)
>>> s
'ppbhat.tga'

Is a regular expression the best solution? That depends on what else
you need to do with your data file.
--
http://mail.python.org/mailman/listinfo/python-list
Re: List of integers & L.I.S.
Working on this allowed me to avoid some _real_ (boring) work at my job. So clearly it served a very useful purpose! ;) Manuel -- http://mail.python.org/mailman/listinfo/python-list
Re: job scheduling framework?
Google turned up these links that might be of interest:

http://www.foretec.com/python/workshops/1998-11/demosession/hoegl/
http://www.webwareforpython.org/Webware/TaskKit/Docs/QuickStart.html
http://www.slac.stanford.edu/BFROOT/www/Computing/Distributed/Bookkeeping/SJM/SJMMain.htm

Larry Bates

Chris Curvey wrote:
> Has anyone seen a simple open source job-scheduling framework written
> in Python? I don't really want to reinvent the wheel. All I need is
> the ability to set up a series of atomic "jobs" as a "stream", then
> have the system execute the jobs in the stream one-at-a-time until all
> the jobs in the stream are complete or one of them reports an error.
>
> (If it tied into a really simple grid-style computing farm, that would
> be worth double points!)
>
> -Chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: Django Vs Rails
Hello! On 7 Sep 2005 20:56:28 -0700 flamesrock wrote: > On the other, Rails seems to have a brighter future, Why that? Django is not yet released and everybody is talking about it. Like it happened with RoR. > How difficult would it be to learn Ruby+Rails, assuming that someone is > already skilled with Python? Learning Ruby: quite trivial, as Ruby is like Python, but sometimes there a Ruby-way and a non-Ruby-way (codeblocks and stuff like this) and to write nice Ruby programs you better write in the Ruby-way. > Is it worth it? Well, I've learned it, because it was easy but I haven't yet found a really significant difference that makes Ruby much better than Python. You can write some things in an very elegant way, but sometimes the Python solution is more readable. But the docs are sometimes.. say: very compact ;). As Ruby is not that hard to learn you could give it a try - maybe you'll like it, maybe not. RoR is not the only framework, some folks prefer Nitro. greets, Marek -- http://mail.python.org/mailman/listinfo/python-list
Re: Help! Python either hangs or core dumps when calling C malloc
Question: Why not just use Python to read the file?

f = open(filename, 'rb')
fcontents = f.read()

If you need to manipulate what is in fcontents you can use the struct
module and/or slicing.

Larry Bates

Lil wrote:
> Hi Everyone! I've been trying to figure out this weird bug in my
> program. I have a python program that calls a C function that reads in
> a binary file into a buffer. In the C program, buffer is allocated by
> calling malloc. The C program runs perfectly fine but when I use python
> to call the C function, it core dumps at malloc.
> I've tried multiple binary files with different sizes and the result
> is:
>
> if file size is < 20 bytes, works fine
> if file size is > 20 bytes, it hangs or core dumps.
>
> Please help!!
> Lil
--
http://mail.python.org/mailman/listinfo/python-list
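
To illustrate the struct-and-slicing suggestion, here is a small
sketch; the header layout (4-byte magic plus a little-endian unsigned
32-bit count) is invented purely for the example:

```python
import struct

# Pretend this is what f.read() returned: a 4-byte magic string
# followed by a little-endian unsigned 32-bit record count.
fcontents = b"DATA" + struct.pack("<I", 20) + b"rest of the file..."

# slice off the 8-byte header and decode it
magic, count = struct.unpack("<4sI", fcontents[:8])
payload = fcontents[8:]
```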
Django and SQLObject. Why not working together?
Django Model is wonderfull. But SQLObject more flexible (and powerfull, as i think, and has already more db interfaces). But Django Model is tied with Django, and using Django with another OO mapping is not comfortable. Why do not working together? I can't understand. If you (Django and SQLObject) would, you'll beat everyone else in efficiency (i mean ROR and maybe some Python's frameworks). It is for all of Open Source. There a lot of concurent projects. Tons of frameworks, template systems and OO maping tools (in Python), numerous gui libraries (flame war: Qt vs GNOME), and different javascript libraries. Numerous libraries in Perl, that are not present in Python. And there is nothing we could oppose to Windows( XP Prof, 2003) + MSOffice(Open Office is behind a bit) + SQLServer(is debateable) + .NET(Mono is slow a bit : I wrote a C# log parser, it works 4min in Mono and 45 sec in .NET) + ... . Consolidation of good teams and their experience is the future of Open Source. Present is the disunity. Sorry for this boolshit and offlist. "Don't you know?Ev'rything's alright, yes, ev'rything's fine... relax think of nothing tonight." (excuse my english) -- http://mail.python.org/mailman/listinfo/python-list
Re: Help! Python either hangs or core dumps when calling C malloc
Hi Larry, It's in the C code mainly because the buffer is an input to the driver. The driver takes a char* as input and I didn't want to pass a char* from python -> swig -> C since swig has memory leaks passing pointers. Do you think this is a Python issue or a Red Hat issue? I'm going to try it on my windows machine now and see what happens. thanks! Lil -- http://mail.python.org/mailman/listinfo/python-list
Re: improvements for the logging package
Trent> Unfortunately you're getting caught by the default logging level
Trent> being WARN, so that any log level below that is tossed.

Ah, okay. I'll pick back through the docs and see what I missed, then
maybe add a description of the minimal steps needed to get going.

>> I suspect the naming could be improved while providing backward
>> compatibility aliases and deprecating those names.

Trent> Do you mean naming like "makeLogRecord" etc?

Yes.

Trent> I thought PEP 8 said camelCase (or whatever it is called) was
Trent> okay?

Hmmm... In the section entitled "Naming Conventions" I see:

    Function Names

      Function names should be lowercase, possibly with words separated
      by underscores to improve readability. mixedCase is allowed only
      in contexts where that's already the prevailing style
      (e.g. threading.py), to retain backwards compatibility.

    Method Names and Instance Variables

      The story is largely the same as with functions: in general, use
      lowercase with words separated by underscores as necessary to
      improve readability.

Since the logging package currently uses mixedCase it would appear it
shouldn't revert to lower_case. I'm thinking it should have probably
used lower_case from the start though. I see no real reason to have
maintained compatibility with log4j. Similarly, I think PyUnit (aka
unittest) should probably have used lower_case method/function names.
After all, someone went to the trouble of PEP-8-ing the module name
when PyUnit got sucked into the core. Why not the internals as well?

I realize I'm playing the devil's advocate here. If a module that's
been stable outside the core for awhile gets sucked into Python's inner
orbit, gratuitous breakage of the existing users' code should be
frowned upon, otherwise people will be hesitant to be early adopters.
There's also the matter of synchronizing multiple versions of the
module (outside and inside the core). Still, a dual naming scheme with
the non-PEP-8 names deprecated should be possible.

In the case of the logging module I'm not sure that applies. If I
remember correctly, it was more-or-less written for inclusion in the
core. In that case it should probably have adhered to PEP 8 from the
start. Maybe going forward we should be more adamant about that when an
external module becomes a candidate for inclusion in the core.

Skip
--
http://mail.python.org/mailman/listinfo/python-list
Re: job scheduling framework?
Larry Bates wrote: > Google turned up these links that might be of interest: > > http://www.foretec.com/python/workshops/1998-11/demosession/hoegl/ > http://www.webwareforpython.org/Webware/TaskKit/Docs/QuickStart.html > http://www.slac.stanford.edu/BFROOT/www/Computing/Distributed/Bookkeeping/SJM/SJMMain.htm > > Larry Bates > > > Chris Curvey wrote: >> Has anyone seen a simple open source job-scheduling framework written >> in Python? I don't really want to reinvent the wheel. All I need is >> the ability to set up a series of atomic "jobs" as a "stream", then >> have the system execute the jobs in the stream one-at-a-time until all >> the jobs in the stream are complete or one of them reports an error. >> >> (If it tied into a really simple grid-style computing farm, that would >> be worth double points!) In addition to Larry's links, this might also be of interest: http://directory.fsf.org/science/visual/Pyslice.html Not exactly the same, but I suspect it may be useful. Cheers, f -- http://mail.python.org/mailman/listinfo/python-list
Re: List of integers & L.I.S.
[EMAIL PROTECTED] > So, this has no real world use, aside from posting it on a website. > Thanks for wasting our time. You are making up an arbitrary problem and > asking for a solution, simply because you want to look at the > solutions, not because your problem needs to be solved. Clearly, this > is a waste of time. If investigating algorithms irritates you, ignore it. The people writing papers on this topic don't feel it's a waste of time. For example, http://citeseer.ist.psu.edu/bespamyatnikh00enumerating.html "Enumerating Longest Increasing Subsequences and Patience Sorting (2000)" Sergei Bespamyatnikh, Michael Segal That's easy to follow, although their use of a Van Emde-Boas set as a given hides the most challenging part (the "efficient data structure" part). -- http://mail.python.org/mailman/listinfo/python-list
Re: improvements for the logging package
[EMAIL PROTECTED] wrote:
> Trent> I thought PEP 8 said camelCase (or whatever it is called) was
> Trent> okay?
>
> Hmmm... In the section entitled "Naming Conventions" I see:
>
>     Function Names
>
>       Function names should be lowercase, possibly with words
>       separated by underscores to improve readability. mixedCase is
>       allowed only in contexts where that's already the prevailing
>       style (e.g. threading.py), to retain backwards compatibility.
>
>     Method Names and Instance Variables
>
>       The story is largely the same as with functions: in general,
>       use lowercase with words separated by underscores as necessary
>       to improve readability.

I swear that has changed since I last read that. :) ...checking...
Guess I haven't read it in about 2 years. This patch:

    Sat Mar 20 06:42:29 2004 UTC (17 months, 2 weeks ago) by kbk

    Patch 919256 Clarify and standardize the format for names of
    modules, functions, methods, and instance variables. Consistent, I
    hope, with discussion on python-dev
    http://mail.python.org/pipermail/python-dev/2004-March/043257.html
    http://mail.python.org/pipermail/python-dev/2004-March/043259.html

Made this change:
http://cvs.sourceforge.net/viewcvs.py/python/python/nondist/peps/pep-0008.txt?r1=1.20&r2=1.21

     Function Names

    -  Plain functions exported by a module can either use the CapWords
    -  style or lowercase (or lower_case_with_underscores). There is
    -  no strong preference, but it seems that the CapWords style is
    -  used for functions that provide major functionality
    -  (e.g. nstools.WorldOpen()), while lowercase is used more for
    -  "utility" functions (e.g. pathhack.kos_root()).
    +  Function names should be lowercase, possibly with underscores to
    +  improve readability. mixedCase is allowed only in contexts where
    +  that's already the prevailing style (e.g. threading.py), to retain
    +  backwards compatibility.

> Since the logging package currently uses mixedCase it would appear it
> shouldn't revert to lower_case. I'm thinking it should have probably
> used lower_case from the start though. I see no real reason to have
> maintained compatibility with log4j. Similarly, I think PyUnit (aka
> unittest) should probably have used lower_case method/function names.
> After all, someone went to the trouble of PEP-8-ing the module name
> when PyUnit got sucked into the core. Why not the internals as well?

Perhaps because of the timing.

> If I remember correctly, it was more-or-less written for inclusion in
> the core.

Yah. It was added before Guido more clearly stated that he thought
modules should have a successful life outside the core before being
accepted in the stdlib.

Trent

--
Trent Mick
[EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: Help! Python either hangs or core dumps when calling C malloc
"Lil" <[EMAIL PROTECTED]> wrote: > It's in the C code mainly because the buffer is an input to the > driver. The driver takes a char* as input and I didn't want to pass a > char* from python -> swig -> C since swig has memory leaks passing > pointers. > Do you think this is a Python issue or a Red Hat issue? I think we would have noticed by now if Python or Red Hat weren't able to allocate and read 20 bytes. It's a bug in your program, and you should concentrate on fixing it, not looking for bugs everywhere else. (quick guess: did you perhaps type malloc(sizeof(bytes)) instead of malloc(bytes), or something similar) -- http://mail.python.org/mailman/listinfo/python-list
Re: List of integers & L.I.S.
> That's easy to follow, although their use of a Van Emde-Boas set as a > given hides the most challenging part (the "efficient data structure" > part). The "efficient data structure" is the easy part. Obviously, it is a dict of lists. ...or is it a list of dicts?... ...or is it a tuple of generators?... Anyway, "being aware of the literature", "thinking", "being smart", and "being Tim Peters" are all forms of cheating. I prefer not to cheat. ;) nOOm, I still need an answer... > also, what do you consider to be the correct output for this > permutation? (according to your original question) > > [4, 5, 1, 2, 3, 6, 7, 8] Manuel -- http://mail.python.org/mailman/listinfo/python-list
Re: List of integers & L.I.S.
[EMAIL PROTECTED] wrote:
> So, this has no real world use, aside from posting it on a website.

I don't think you're quite right. We never know where we gain and where
we lose.

> So clearly it served a very useful purpose! ;)

Thanks, Manuel!

> your question is different than the question on this website.

Not exactly so (maybe I'm wrong here).

How I did it (but got TLE - Time Limit Exceeded (which is 9 seconds)).
Firstly I find ordering numbers when moving from left to the right;
then I find ord. numbers for backward direction AND for DECREASING
subsequences:

4 5 1 2 3 6 7 8  << the list itself
1 2 1 2 3 4 5 6  << ordering numbers for forward direction
2 1 6 5 4 3 2 1  << ordering numbers for backward direction
===
3 3 7 7 7 7 7 7  << sums of the pairs of ord. numbers

Now those numbers with sum_of_ord.pair = max + 1 = 6 + 1 are the
answer. So the answer for your sample is: 1 2 3 6 7 8

Btw, I did it in Pascal. Honestly, I don't believe it can be done in
Python (of course I mean only the imposed time limit).
http://spoj.sphere.pl/status/SUPPER/
--
http://mail.python.org/mailman/listinfo/python-list
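
The forward/backward scheme described above translates to Python
directly. This is a naive O(n^2) sketch (far too slow for the SPOJ
9-second limit, shown only to make the idea concrete): an element
belongs to some longest increasing subsequence exactly when its
forward and backward ordering numbers sum to the LIS length plus one.

```python
def lis_members(seq):
    """Return the elements that belong to some longest increasing
    subsequence, using the forward + backward ordering-number idea."""
    n = len(seq)
    fwd = [1] * n    # fwd[i]: longest increasing run ending at seq[i]
    for i in range(n):
        for j in range(i):
            if seq[j] < seq[i]:
                fwd[i] = max(fwd[i], fwd[j] + 1)
    bwd = [1] * n    # bwd[i]: longest increasing run starting at seq[i]
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            if seq[i] < seq[j]:
                bwd[i] = max(bwd[i], bwd[j] + 1)
    best = max(fwd)
    # seq[i] is in some L.I.S. exactly when fwd[i] + bwd[i] == best + 1
    return [x for i, x in enumerate(seq) if fwd[i] + bwd[i] == best + 1]
```

For the sample permutation [4, 5, 1, 2, 3, 6, 7, 8] this picks out
1 2 3 6 7 8, matching the answer above.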
Re: List of integers & L.I.S.
PS: I've still not read 2 new posts. -- http://mail.python.org/mailman/listinfo/python-list
Re: Printer List from CUPS
Mike Tammerman wrote: > Hi, > > I want to get the printer list from CUPS. I found some ways using > > lpstat -p and > http://localhost:631/printers > > but, these ways require some parsing and I am not sure, if the parsing > works all the time. A pythonic way would be very helpful. > > Thanks, > Mike > The HPLIP project (hpinkjet.sf.net) includes a basic CUPS extension module in the src/prnt/cupsext directory. Its pretty rough, but it will return a list of CUPS printers easily enough. I am in the process of rewriting it in Pyrex and hope to include more complete CUPS API coverage. -Don -- http://mail.python.org/mailman/listinfo/python-list
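
Parsing `lpstat -p` by hand is workable if you accept its usual line
shape; a sketch (it assumes lines like "printer <name> is idle. ...",
which is what lpstat typically prints but can vary with CUPS version
and locale, which is exactly the fragility Mike is worried about):

```python
import subprocess

def parse_lpstat(text):
    """Pull printer names out of `lpstat -p` output.

    Assumes each printer line starts "printer <name> ..."; the exact
    wording varies with CUPS version and locale.
    """
    names = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "printer":
            names.append(parts[1])
    return names

def cups_printers():
    out = subprocess.check_output(["lpstat", "-p"]).decode()
    return parse_lpstat(out)

# usage against canned output (note the wrapped continuation line)
sample = ("printer lp0 is idle.  enabled since Jan 01 00:00\n"
          "printer hp_laser disabled since Jan 01 00:00 -\n"
          "\tout of paper\n")
printers = parse_lpstat(sample)
```

A real CUPS binding (like the cupsext module mentioned above) avoids
the parsing entirely.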
