Re: File Closing Problem in 2.3 and 2.4, Not in 2.5 (Final report)

2007-01-10 Thread Marc 'BlackJack' Rintsch
In <[EMAIL PROTECTED]>, Julio Biason
wrote:

> If I use a file() in a for, how do I explicitly close the file?
> 
> 
> for line in file('contents'):
>     print line
> 
> 
> Would this work like the new 'with' statement or it will only be closed
> when the GC finds it?

Only when the GC destroys it.
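For anyone who wants a deterministic close instead of waiting for the GC, a minimal sketch of the two usual alternatives (the filename 'contents' is taken from the question; on 2.5 the with-form needs "from __future__ import with_statement"):

```python
# Create a small sample file so the sketch is self-contained.
f = open('contents', 'w')
f.write('first\nsecond\n')
f.close()

# Option 1: explicit close via try/finally - deterministic on any
# Python implementation, whatever its garbage collector does.
f1 = open('contents')
try:
    lines = [line.rstrip() for line in f1]
finally:
    f1.close()          # runs even if the loop raises

# Option 2: the with-statement closes automatically on block exit.
with open('contents') as f2:
    lines2 = [line.rstrip() for line in f2]
```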

Ciao,
Marc 'BlackJack' Rintsch
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Bitwise expression

2007-01-10 Thread Hendrik van Rooyen
 "Gigs_" <[EMAIL PROTECTED]> wrote:


> Now is all clearer thanks to [EMAIL PROTECTED] and Hendrick van Rooyen

Contrary to popular belief in the English speaking world - 

>>> "c" in "Hendrik"
False
>>> 

There is no "c" in "Hendrik"

:-)  - Hendrik

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Colons, indentation and reformatting. (2)

2007-01-10 Thread Hendrik van Rooyen

 "Jorgen Grahn" <[EMAIL PROTECTED]>wrote:

> On 8 Jan 2007 23:57:29 -0800, Paddy <[EMAIL PROTECTED]> wrote:
> >
> > OK, whilst colons are not sufficient to re-format a completely
> > mis-indented file, I'm thinking that they are sufficient for
> > reformatting most pasted code blocks when refactoring, say?
> 
> Let's put it this way: if the formatter can assume the original code is
> valid (i.e. has the intended indentation) then it can do all kinds of nifty
> things to it.

This is true - and I think it will only fail if the "entry" point in the pasted
code is "further to the right" than where it has to fit in to the original
code - i.e. if you "run out of space" to the left.  - but in that case
you really are hacking, and you are in urgent need of some slashing...

- Hendrik

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why less emphasis on private data?

2007-01-10 Thread Hendrik van Rooyen
"Steven D'Aprano" <[EMAIL PROTECTED]> wrote:

> On Tue, 09 Jan 2007 10:27:56 +0200, Hendrik van Rooyen wrote:
>
> > "Steven D'Aprano" <[EMAIL PROTECTED]> wrote:
> >
> >
> >> On Mon, 08 Jan 2007 13:11:14 +0200, Hendrik van Rooyen wrote:
> >>
> >> > When you hear a programmer use the word "probability" -
> >> > then its time to fire him, as in programming even the lowest
> >> > probability is a certainty when you are doing millions of
> >> > things a second.
> >>
> >> That is total and utter nonsense and displays the most appalling
> >> misunderstanding of probability, not to mention a shocking lack of common
> >> sense.
> >
> > Really?
> >
> > Strong words.
> >
> > If you don't understand you need merely ask, so let me elucidate:
> >
> > If there is some small chance of something occurring at run time that can
> > cause code to fail - a "low probability" in all the accepted senses of the
> > word - and a programmer declaims - "There is such a low probability of
> > that occurring and its so difficult to cater for that I won't bother"
> > - then am I supposed to congratulate him on his wisdom and outstanding
> > common sense?
> >
> > Hardly. - If anything can go wrong, it will. - to paraphrase Murphy's law.
> >
> > To illustrate:
> > If there is one place in any piece of code that is critical and not
> > protected, even if it's in a relatively rarely called routine, then
> > because of the high speed of operations, and the fact that time is
> > essentially infinite,
>
> Time is essentially infinite? Do you really expect your code will still be
> in use fifty years from now, let alone a billion years?

My code does not suffer from bit rot, so it should outlast the hardware...

But seriously - for the sort of mistakes we make as programmers - it does
not actually need infinite time for the lightning to strike - most things that
will actually run overnight are probably stable - and if it takes say a week
of running for the bug to raise its head - it is normally a very difficult
problem to find and fix. A case in point - One of my first postings to
this newsgroup concerned an intermittent failure on a serial port - It was
never resolved in a satisfactory manner - eventually I followed my gut
feel, made some changes, and it seems to have gone away - but I expect
it to bite me anytime - I don't actually *know* that it's fixed, and there is
not, as a corollary to your sum below here, any real way to know for
certain.

>
> I know flowcharts have fallen out of favour in IT, and rightly so -- they
> don't model modern programming techniques very well, simply because modern
> programming techniques would lead to a chart far too big to be practical.

I actually like drawing data flow diagrams, even if they are sketchy, primitive
ones, to try to model the inter process communications (where a "process"
may be just a python thread) - I find it useful to keep an overall perspective.

> But for the sake of the exercise, imagine a simplified flowchart of some
> program, one with a mere five components, such that one could take any of
> the following paths through the program:
>
> START -> A -> B -> C -> D -> E
> START -> A -> C -> B -> D -> E
> START -> A -> C -> D -> B -> E
> ...
> START -> E -> D -> C -> B -> A
>
> There are 5! (five factorial) = 120 possible paths through the program.
>
> Now imagine one where there are just fifty components, still quite a
> small program, giving 50! = 3e64 possible paths. Now suppose that there is
> a bug that results from following just one of those paths. That would
> match your description of "lowest probability" -- any lower and it would
> be zero.
>
> If all of the paths are equally likely to be taken, and the program takes
> a billion different paths each millisecond, on average it would take about
> 1.5e55 milliseconds to hit the bug -- or about 5e44 YEARS of continual
> usage. If every person on Earth did nothing but run this program 24/7, it
> would still take on average almost sixty million billion billion billion
> years to discover the bug.

In something with just 50 components it is, I believe, better to try to
inspect the quality in, than to hope that random testing will show up
errors - But I suppose this is all about design, and about avoiding
doing known no-nos.

>
> But of course in reality some paths are more likely than others. If the
> bug happens to exist in a path that is executed often, or if it exists
> in many paths, then the bug will be found quickly. On the other hand, if
> the bug is in a path that is rarely executed, your buggy program may be
> more reliable than the hardware you run it on. (Cynics may say that isn't
> hard.)

Oh I am of the opposite conviction - Like the fellow of the Circuit Cellar
I forget his name ( Steve Circia (?) ) who said:  "My favourite Programming
Language is Solder"... I find that when I start blaming the hardware
for something that is going wrong, I am seldom right...

And this is true also for hardware that we make ourselves, that 

Re: Parallel Python

2007-01-10 Thread parallelpython
> I always thought that if you use multiple processes (e.g. os.fork) then
> Python can take advantage of multiple processors. I think the GIL locks
> one processor only. The problem is that one interpreted can be run on
> one processor only. Am I not right? Is your ppm module runs the same
> interpreter on multiple processors? That would be very interesting, and
> something new.
>
>
> Or does it start multiple interpreters? Another way to do this is to
> start multiple processes and let them communicate through IPC or a local
> network.

   That's right. ppsmp starts multiple interpreters in separate
processes and organizes communication between them through IPC.

   Originally ppsmp was designed to speed up an existing application
which is written in pure Python but is quite computationally expensive
(other ways to optimize it were used too). It was also required
that the application run out of the box on most standard Linux
distributions (they all contain CPython).
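For the curious, here is a toy sketch of the multiple-processes idea using only os.fork and a pipe (POSIX-specific; this is my own illustration of the pattern, not ppsmp's actual API):

```python
import os

def square(n):
    # Work that would benefit from a separate process.
    return n * n

r, w = os.pipe()          # simple one-way IPC channel
pid = os.fork()           # child gets its own interpreter state and GIL
if pid == 0:
    # Child: compute, send the result back over the pipe, exit.
    os.close(r)
    os.write(w, str(square(7)).encode())
    os._exit(0)
else:
    # Parent: read the child's answer and reap it.
    os.close(w)
    result = int(os.read(r, 64))
    os.close(r)
    os.waitpid(pid, 0)
```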

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Xah's Edu Corner: Introduction to 3D Graphics Programing

2007-01-10 Thread John Ersatznom
[EMAIL PROTECTED] wrote:
> And the core folks around the project are either science educators or
> Python folks - there is little C++ expertise currently involved with
> the project.
> 
> The project is looking for help.
> 
> Anyone willing to jump in should perhaps reply here or at:
[snip address]

I see lots of mentions of C++ and Python, but not Java, so suggesting
that anyone reply "here" (i.e. comp.lang.java.programmer) seems
questionable to me.

And what the hell is wrong with my goddam newsserver? Again it
complained that a header was missing (which either should have been
there to start with or not been an error) and then when I added it that
the message was a duplicate (it was a duplicate only if the previous try
had succeeded, but it claimed the previous try had failed).

I'm thinking of ditching aioe. Anyone know of any other public, free
newsservers that permit posting as well as reading? A whole lot of Web
research has failed to turn up any besides aioe. Mind you, I found a lot
of high quality free ones that permit reading only (some with binaries!
not that I need 'em) and at least one that calls itself "free" and says
it permits posting but actually charges an "account setup fee" -- where
are truth in advertising laws when you need them? "Free" doesn't mean
"No monthly payments" or "No recurring payments", it means FREE, as in
NO PAYMENTS AT ALL, MORONS ... :P
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: distutils and ctypes

2007-01-10 Thread [EMAIL PROTECTED]

Robert Kern wrote:
> [EMAIL PROTECTED] wrote:
>
> > So finally, my question is, is there a way to get distutils to simply
> > build a shared library on windows so that I can use ctypes with them???
>
> Not out-of-box, no. The OOF2 project has added a bdist_shlib command which
> should do most of what you want, though. It's somewhat UNIX-oriented, and I
> think it tries to install the shared library to a standard location (e.g.
> /usr/local/lib). You might want to modify it to install the shared library in
> the package so it is easy to locate at runtime.
>
>   http://www.ctcms.nist.gov/oof/oof2/
>   http://www.ctcms.nist.gov/oof/oof2/source/oof2-2.0.1.tar.gz
>
> The code is in the shlib/ subdirectory.
> 

Thank you very much - this looks like exactly what I want.
John

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: distutils and ctypes

2007-01-10 Thread [EMAIL PROTECTED]

Martin v. Löwis wrote:
> Robert Kern schrieb:
> > To which I say that doing the type-checking and error handling is much 
> > easier in
> > Python than using the C API. Add to that the tediousness of the 
> > edit-compile-run
> > cycle of C and the finickiness of refcounting.
>
> Sure: edit-compile-run is tedious, but in the given case, it is
> necessary anyway. Also, refcounting is finicky, but with ctypes,
> you often have to use just as finicky memory management APIs.
> (such as using specific allocation and deallocation routines,
> or having to do memory management at all when in C you would
> do stack allocation).
>
> My main concern with ctypes is that you have to duplicate
> information from the header files, which is error-prone,
> especially when the header files may change (either across
> versions or across target systems).
>

I have looked at: building an extension module, using pyrex and using
ctypes. Initially ctypes won because I was under the impression that to
use pyrex or build an extension module required the same compiler as
python was compiled with. This is not a problem on linux, but under
windows this is much too great a problem for potential users (who will
also need to compile the module). I now have discovered that I can use
mingw32 (without patching) under windows (with python2.5) though I'm
not clear if this is by chance or has been implemented as a new feature
(can't find suitable documentation).

Anyway, at this point I think I will stick with ctypes purely out of
simplicity. My C library is for some legacy lab hardware control. I
only need to call three functions. With ctypes, I can check the data in
Python (which I do anyway to validate it) and pass it on to the library
in about 10 lines of code. Writing an extension module I believe would
be much more difficult.

Anyway,
Thanks both for your help.
John

-- 
http://mail.python.org/mailman/listinfo/python-list

Question about compiling.

2007-01-10 Thread Steven W. Orr
I *just* read the tutorial so please be gentle. I created a file called 
fib.py which works very nicely, thank you. When I run it, it does what it's 
supposed to do, but I do not get a resulting .pyc file. The tutorial says I 
shouldn't have to do anything special to create it. I have machines that have 
both 2.4.1 and 2.3.5. Does anyone have an idea what to do?

TIA

-- 
Time flies like the wind. Fruit flies like a banana. Stranger things have  .0.
happened but none stranger than this. Does your driver's license say Organ ..0
Donor?Black holes are where God divided by zero. Listen to me! We are all- 000
individuals! What if this weren't a hypothetical question?
steveo at syslang.net
-- 
http://mail.python.org/mailman/listinfo/python-list


Establishing if an Object is Defined

2007-01-10 Thread bg_ie
Hi,

The following code works -

one = 1
if one == 1:
  ok = 1
print ok

but this does not, without exception -

one = 2
if one == 1:
  ok = 1
print ok

How do I establish before printing ok if it actually exists so as to
avoid this exception?

Thanks for your help,

Barry.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Establishing if an Object is Defined

2007-01-10 Thread Laurent Pointal
[EMAIL PROTECTED] a écrit :
> Hi,
> 
> The following code works -
> 
> one = 1
> if one == 1:
>   ok = 1
> print ok
> 
> but this does not, without exception -
> 
> one = 2
> if one == 1:
>   ok = 1
> print ok
> 
> How do I establish before printing ok if it actually exists so as to
> avoid this exception?

ok = 0
...do the job...

Very simple, failure-proof, no special case.
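Spelled out, the pre-initialisation approach looks like this (using the values from the question):

```python
one = 2
ok = 0            # give the name a default before the conditional
if one == 1:
    ok = 1
print(ok)         # the name is always bound now; prints 0 here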


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Question about compiling.

2007-01-10 Thread Gabriel Genellina

At Tuesday 9/1/2007 14:56, Steven W. Orr wrote:


I *just* read the tutorial so please be gentle. I created a file called
fib.py which works very nicely thank you. When I run it it does what it's
supposed to do but I do not get a resulting .pyc file. The tutorial says I
shouldn't do anything special to create it. I have machines that have both
2.4.1 and 2.3.5. Does anyone have an idea what to do?


Welcome to Python!
When you run a script directly, no .pyc file is generated; one is 
created only when a module is imported (see section 6.1.2 of the 
tutorial). And don't worry about it...
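If you do want the .pyc without importing the module from a second file, the standard library can byte-compile it directly. A small sketch (fib_demo.py is an invented stand-in for the poster's fib.py):

```python
import os
import py_compile

# Write a tiny module, then byte-compile it explicitly.
src = (
    'def fib(n):\n'
    '    a, b = 0, 1\n'
    '    for _ in range(n):\n'
    '        a, b = b, a + b\n'
    '    return a\n'
)
f = open('fib_demo.py', 'w')
f.write(src)
f.close()

py_compile.compile('fib_demo.py', cfile='fib_demo.pyc')
print(os.path.exists('fib_demo.pyc'))  # True
```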



--
Gabriel Genellina
Softlab SRL 







__ 
Preguntá. Respondé. Descubrí. 
Todo lo que querías saber, y lo que ni imaginabas, 
está en Yahoo! Respuestas (Beta). 
¡Probalo ya! 
http://www.yahoo.com.ar/respuestas 

-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Why less emphasis on private data?

2007-01-10 Thread Gabriel Genellina

At Wednesday 10/1/2007 04:33, Hendrik van Rooyen wrote:


Oh I am of the opposite conviction - Like the fellow of the Circuit Cellar
I forget his name ( Steve Circia (?) ) who said:  "My favourite Programming
Language is Solder"..


Almost right: Steve Ciarcia.


--
Gabriel Genellina
Softlab SRL 








-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Maths error

2007-01-10 Thread Nick Maclaren

In article <[EMAIL PROTECTED]>,
Tim Peters <[EMAIL PROTECTED]> writes:
|> 
|> Huh.  I don't read it that way.  If it said "numbers can be ..." I 
|> might, but reading that way seems to requires effort to overlook the 
|> "decimal" in "decimal numbers can be ...".

I wouldn't expect YOU to read it that way, but I can assure you from
experience that many people do.  What it MEANS is "Numbers with a
short representation in decimal can be represented exactly in decimal
arithmetic", which is tautologous.  What they READ it to mean is
"One advantage of representing numbers in decimal is that they can be
represented exactly", and they then assume that also applies to pi,
sqrt(2), 1/3 

The point is that the "decimal" could apply equally well to the external
or internal representation and, if you aren't fairly clued-up in this
area, it is easy to choose the wrong one.

|> >|>> and how is decimal no better than binary?
|>  
|> >|> Basically, they both lose info when rounding does occur.  For
|> >|> example, 
|> 
|> > Yes, but there are two ways in which binary is superior.  Let's skip
|> > the superior 'smoothness', as being too arcane an issue for this
|> > group,
|> 
|> With 28 decimal digits used by default, few apps would care about this 
|> anyway.

Were you in the computer arithmetic area during the "base wars" of the
1960s and 1970s that culminated with binary winning out?  A lot of very
well-respected numerical analysts said that larger bases led to a
faster build-up of error (independent of the precision).  My limited
investigations indicated that there was SOME truth in that, but it
wasn't a major matter; I never saw the matter settled definitively.

|> > and deal with the other.  In binary, calculating the mid-point
|> > of two numbers (a very common operation) is guaranteed to be within
|> > the range defined by those numbers, or to over/under-flow.
|> >
|> > Neither (x+y)/2.0 nor (x/2.0+y/2.0) are necessarily within the range
|> > (x,y) in decimal, even for the most respectable values of x and y.
|> > This was a MAJOR "gotcha" in the days before binary became standard,
|> > and will clearly return with decimal.
|> 
|> I view this as being an instance of "lose info when rounding does 
|> occur".  For example,

No, absolutely NOT!   This is an orthogonal matter, and is about the
loss of an important invariant when using any base above 2.

Back in the days when there were multiple bases, virtually every
programmer who wrote large numerical code got caught by it at least
once, and many got caught several times (it has multiple guises).
For example, take the following algorithm for binary chop:

while 1 :
    c = (a+b)/2
    if f(c) < y :
        if c == b :
            break
        b = c
    else :
        if c == a :
            break
        a = c

That works in binary, but in no base above 2 (assuming that I haven't
made a stupid error writing it down).  In THAT case, it is easy to fix
for decimal, but there are ways that it can show up that can be quite
tricky to fix.
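The lost invariant is easy to reproduce with Python's decimal module at an artificially small precision (the values here are my own illustration, not Nick's):

```python
from decimal import Decimal, getcontext

getcontext().prec = 3        # 3 significant digits, to make rounding visible

x = Decimal('0.891')
y = Decimal('0.893')
mid = (x + y) / 2            # x + y rounds to 1.78, so mid becomes 0.89

print(mid)                   # 0.89 -- strictly below x, i.e. outside (x, y)
print(x <= mid <= y)         # False
```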


Regards,
Nick Maclaren.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Establishing if an Object is Defined

2007-01-10 Thread Bruno Desthuilliers
[EMAIL PROTECTED] a écrit :
> Hi,
> 
> The following code works -
> 
> one = 1
> if one == 1:
>   ok = 1
> print ok
> 
> but this does not, without exception -
> 
> one = 2

Are you competing for the Most Misleading Name Award(tm) ?-)

> if one == 1:
>   ok = 1
> print ok
> 
> How do I establish before printing ok if it actually exists so as to
> avoid this exception?

The simplest way is to make sure the name will be defined whatever the 
value of the test:

one = 42
# ...
ok = (one == 1)
print ok


As a side note, if you want to check whether a name exists in the current 
namespace, you can use a try/except block:

try:
   toto
   print "toto is defined"
except NameError:
   print "toto is not defined"
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pyExcelerator big integer values

2007-01-10 Thread Gacha
Thank you, the repr() function helped me a lot.

v = unicode(values[(row_idx, col_idx)])
if v.endswith('e+12'):
    v = repr(values[(row_idx, col_idx)])

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: maximum number of threads

2007-01-10 Thread Gabriel Genellina

At Wednesday 10/1/2007 04:38, Paul Sijben wrote:


I have a server in Python 2.5 that generates a lot of threads. It is
running on a linux server (Fedora Core 6).
The server quickly runs out of threads.

  File "/usr/local/lib/python2.5/threading.py", line 434, in start
_start_new_thread(self.__bootstrap, ())
error: can't start new thread

Does anyone know what it going on here and how I can ensure that I have
all the threads I need?


Simply, you can't, just as you can't have an unlimited number of open 
files at once. Computer resources are not infinite.
Do you really need so many threads? Above a certain threshold, the 
program's total execution time may increase very quickly.



--
Gabriel Genellina
Softlab SRL 








-- 
http://mail.python.org/mailman/listinfo/python-list

RE: dynamic library loading, missing symbols

2007-01-10 Thread Ames Andreas
Some random notes below ...

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED]] On Behalf Of [EMAIL PROTECTED]
> Sent: Wednesday, January 10, 2007 1:26 AM
> Subject: dynamic library loading, missing symbols
> 
> The reason for the complication is that I don't have control over how
> the library does dlopen() or how the code that calls dlopen was
> compiled. I am, however, able to control the build process for the
> Boost.Python wrapper and the original C++ code that the Boost.Python
> wraps. I've tried as many linker tricks as I can think of to get this
> to work. Both the boost wrapper and the C++ code that it wraps are
> built using --export-dynamic.

* Make sure the exported symbols are marked "extern C"

* Otherwise (you export C++ symbols), make sure the exporting component uses 
*exactly* the same compiler (version, ABI-influencing flags and all) as the 
importing component.

* IMHO, C++ .sos are principally painful and almost unbearable, if you have 
(non-source) third party components involved.

* Use nm to find the exact names of the exported and imported symbols (as 
already suggested).

> 
> Is there a way to set at runtime what directories or libraries the
> linker should search for these symbols? I have set LD_LIBRARY_PATH
> correctly, but that didn't seem to affect anything.
> 
> For reference, I am running on Gentoo linux 2.6.11.12 with gcc 3.4.4
> 
> I'm interested in any ideas that might help, but the ideal one should
> work on any *nix system and not just linux.

* You might read http://people.redhat.com/drepper/dsohowto.pdf, which I found 
very useful.  There are ways described, how to control the way, the dynamic 
loader resolves symbols, all with their resp. caveats.

* Nevertheless it only describes ELF-based systems, and mostly systems using 
Drepper's own .so-loader.

* Portability to e.g. Windows is almost impossible for you, because it 
doesn't support undefined symbols in .sos (and Drepper suggests that even on 
ELF-based systems this should only be used if absolutely unavoidable, IIRC).  NB:  There 
is a project on SF which claims to provide this feature on Windows, but I 
haven't tried it and it limits your choice of tools (http://edll.sf.net/).


cheers,

aa

-- 
Andreas Ames | Programmer | Comergo GmbH |
Voice:  +49 711 13586 7789 | ames AT avaya DOT com
-- 
http://mail.python.org/mailman/listinfo/python-list


convert binary data to int

2007-01-10 Thread rubbishemail
Hello,


I need to convert a 3 byte binary string like
"\x41\x00\x00" to 3 int values ( (65,0,0) in this case).
The string might contain characters not escaped with a \x, like
"A\x00\x00"


Any ideas?


Daniel

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: convert binary data to int

2007-01-10 Thread Peter Otten
[EMAIL PROTECTED] wrote:

> I need to convert a 3 byte binary string like
> "\x41\x00\x00" to 3 int values ( (65,0,0) in this case).
> The string might contain characters not escaped with a \x, like
> "A\x00\x00"

>>> [ord(c) for c in "A\x00\x41"]
[65, 0, 65]

For more complex conversions look into the struct module.

Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: convert binary data to int

2007-01-10 Thread Laurent Pointal
[EMAIL PROTECTED] a écrit :
> Hello,
> 
> 
> I need to convert a 3 byte binary string like
> "\x41\x00\x00" to 3 int values ( (65,0,0) in this case).
> The string might contain characters not escaped with a \x, like
> "A\x00\x00"
> 
> 
> Any ideas?

>>> s = "\x41\x00\x00"
>>> [ ord(c) for c in s ]
[65, 0, 0]


-- 
http://mail.python.org/mailman/listinfo/python-list


Is there a way to protect a piece of critical code?

2007-01-10 Thread Hendrik van Rooyen
Hi,

I would like to do the following as one atomic operation:

1) Append an item to a list
2) Set a Boolean indicator

It would be almost like getting and holding the GIL,
to prevent a thread swap out between the two operations.
- sort of the inverted function than for which the GIL
seems to be used, which looks like "let go", get control
back via return from blocking I/O, and then "re - acquire"

Is this "reversed" usage possible?
Is there some way to prevent thread swapping?

The question arises in the context of a multi threaded
environment where the list is used as a single producer,
single consumer queue - I can solve my problem in various
ways, of which this is one, and I am curious as to if it is 
possible to prevent a thread swap from inside the thread.

- Hendrik

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: convert binary data to int

2007-01-10 Thread Gabriel Genellina

At Wednesday 10/1/2007 07:17, [EMAIL PROTECTED] wrote:


I need to convert a 3 byte binary string like
"\x41\x00\x00" to 3 int values ( (65,0,0) in this case).
The string might contain characters not escaped with a \x, like
"A\x00\x00"


py> [ord(x) for x in "\x41\x00\x00"]
[65, 0, 0]
py> [ord(x) for x in "A\x00\x00"]
[65, 0, 0]
py> "\x41\x00\x00" == "A\x00\x00"
True
py> "\x41\x00\x00" is "A\x00\x00"
True

(The last test is actually irrelevant - identity of equal string literals 
is an implementation detail, not something to rely on)


--
Gabriel Genellina
Softlab SRL 








-- 
http://mail.python.org/mailman/listinfo/python-list

Re: convert binary data to int

2007-01-10 Thread rubbishemail
[ord(x) for ...]
perfect, thank you


Daniel

-- 
http://mail.python.org/mailman/listinfo/python-list


add re module to an embedded device Python interpreter

2007-01-10 Thread odlfox
   I have to add the re (regular expressions) functionality to an
embedded device that I'm using.  I read the re.py file and it says I
need several dependencies, one of them being the pcre module, but I found
no pcre.py or pcre.pyc file.  Does anyone know where to find something to
solve my problem?  Thanks a lot

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to protect a piece of critical code?

2007-01-10 Thread Diez B. Roggisch
Hendrik van Rooyen wrote:

> Hi,
> 
> I would like to do the following as one atomic operation:
> 
> 1) Append an item to a list
> 2) Set a Boolean indicator
> 
> It would be almost like getting and holding the GIL,
> to prevent a thread swap out between the two operations.
> - sort of the inverted function than for which the GIL
> seems to be used, which looks like "let go", get control
> back via return from blocking I/O, and then "re - acquire"
> 
> Is this "reversed" usage possible?
> Is there some way to prevent thread swapping?
> 
> The question arises in the context of a multi threaded
> environment where the list is used as a single producer,
> single consumer queue - I can solve my problem in various
> ways, of which this is one, and I am curious as to if it is
> possible to prevent a thread swap from inside the thread.

There have been discussions about making the GIL available from Python code.
But it is considered an implementation detail - and this is the reason you can't
do what you need the way you want to.

Just use a regular lock - in the end, that is what the GIL is anyway.

And besides that, you don't "get" anything your way, as the
thread-scheduling itself isn't controlled by Python - instead the OS
threading implementation is used.
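A minimal sketch of the regular-lock approach for the original append-plus-flag pair (all names here are invented for illustration):

```python
import threading

items = []
data_ready = False
lock = threading.Lock()

def produce(item):
    global data_ready
    with lock:               # both updates happen under one lock...
        items.append(item)
        data_ready = True

def consume():
    global data_ready
    with lock:               # ...so a consumer never sees them half-done
        if data_ready:
            data_ready = False
            return list(items)
        return None

produce('x')
snapshot = consume()
```

On 2.5 the with-statement needs the __future__ import; an explicit lock.acquire()/release() pair in try/finally is equivalent.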

Diez
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to protect a piece of critical code?

2007-01-10 Thread Gabriel Genellina

At Wednesday 10/1/2007 07:14, Hendrik van Rooyen wrote:


I would like to do the following as one atomic operation:

1) Append an item to a list
2) Set a Boolean indicator


Wouldn't a thread.Lock object serve your purposes?


--
Gabriel Genellina
Softlab SRL 








-- 
http://mail.python.org/mailman/listinfo/python-list

Re: maximum number of threads

2007-01-10 Thread Felipe Almeida Lessa
On 1/10/07, Gabriel Genellina <[EMAIL PROTECTED]> wrote:
> At Wednesday 10/1/2007 04:38, Paul Sijben wrote:
> >Does anyone know what it going on here and how I can ensure that I have
> >all the threads I need?
>
> Simply, you can't, just as you can't have an unlimited number of open files at once.
> Computer resources are not infinite.
> Do you really need so many threads? Above a certain threshold, the
> program total execution time may increase very quickly.

Maybe Stackless could help the OP?
http://www.stackless.com/

-- 
Felipe.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: maximum number of threads

2007-01-10 Thread Paul Sijben
Gabriel Genellina wrote:
> 
> Simply, you can't, just as you can't have an unlimited number of open
> files at once. Computer resources are not infinite.

sure but *how* fast they run out is the issue here

> Do you really need so many threads? 

I might be able to do with a few less but I still need many.

I have done a quick test.

on WinXP I can create 1030 threads
on Fedora Core 6 I can only create 301 (both python2.4 and 2.5)

now the 301 is rather low I'd say.
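One knob worth checking on Linux: the per-thread stack size, which distributions often default to several megabytes, exhausting address space after a few hundred threads. threading.stack_size() (new in 2.5) lets you shrink it; a sketch:

```python
import threading

# Request a smaller per-thread stack before creating any threads
# (must be 0 or >= 32768, and a multiple of 4096 on some platforms).
threading.stack_size(256 * 1024)

results = []

def worker(n):
    results.append(n)        # list.append is atomic enough for this demo

threads = [threading.Thread(target=worker, args=(i,)) for i in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 50
```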

Paul

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: maximum number of threads

2007-01-10 Thread Paul Sijben
Felipe Almeida Lessa wrote:

> Maybe Stackless could help the OP?
> http://www.stackless.com/
> 

thanks I will look into it!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: convert binary data to int

2007-01-10 Thread Paul Sijben
In some cases struct.unpack would also help.
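For instance, with the 3-byte string from the original post ('3B' meaning three unsigned bytes):

```python
import struct

data = "A\x00\x00"
# struct.unpack wants a bytes object on Python 3; latin-1 maps each
# character straight to the byte with the same ordinal.
values = struct.unpack('3B', data.encode('latin-1'))
print(values)  # (65, 0, 0)
```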

[EMAIL PROTECTED] wrote:
> Hello,
> 
> 
> I need to convert a 3 byte binary string like
> "\x41\x00\x00" to 3 int values ( (65,0,0) in this case).
> The string might contain characters not escaped with a \x, like
> "A\x00\x00"
> 
> 
> Any ideas?
> 
> 
> Daniel
> 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pyExcelerator big integer values

2007-01-10 Thread John Machin

Gacha wrote:
> Thank you, the repr() function helped me a lot.
>
> v = unicode(values[(row_idx, col_idx)])
> if v.endswith('e+12'):
> v = repr(values[(row_idx, col_idx)])

That endswith() looks rather suspicious ... what if it's +11 or +13,
and shouldn't it have a zero in it, like "+012" ??

Here's a possible replacement -- I say possible because you have been
rather coy about what you are actually trying to do.

value = values[(row_idx, col_idx)]
if isinstance(value, float):
    v = repr(value)
else:
    v = unicode(value)

HTH
John

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: add re module to an embedded device Python interpreter

2007-01-10 Thread robert
odlfox wrote:
>I hav to add the re (regular expressions) functionality to an
> Embeded device that I'm using.  I read the re.py file and it says I
> need several dependencies, one of them is the pcre module but I found
> no pcre.py or pcre.pyc file.  Someone knows where to find something to
> solve my problem. Thanks a lot
> 

that drills down to a .dll/.so thing depending on installation and Python 
version.

Best make a project/test/dummy script using all the things you want - or 
actually use your project as is. 
Then use cx-freeze like "FreezePython -OO mystart.py" to let it collect 
all the dependent modules in a dist-folder. Then you don't have to worry about 
all the details yourself.
You can also use UPX and 7zip to further compress the DLLs/.so files and the 
zip archive of Python modules.


Robert
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to protect a piece of critical code?

2007-01-10 Thread robert
Hendrik van Rooyen wrote:
> Hi,
> 
> I would like to do the following as one atomic operation:
> 
> 1) Append an item to a list
> 2) Set a Boolean indicator


I doubt you have to worry about this at all in such a simple single-producer, 
single-consumer queue - unless there is a much more complex condition on the 
insert order. And what should the indicator tell? That a new element is there? 

The list itself tells you its length, which is guaranteed to be increased 
_after_ .append().
And you can just .pop(0) - catching/retrying on IndexError at least.

List .append() and .pop() will be atomic in any Python, though this is not 
mentioned explicitly - otherwise it would be time to leave Python.

There is also Queue.Queue - though it has unneccessary overhead for most 
purposes.


A function to block Python interpreter thread switching in such a VHL language 
would be nice for reducing the need to spread locks in some cases (huge 
code, small critical sections). Yet your example is by far not a trigger for 
this. I also requested that once. Implementation in non-C Pythons may be 
difficult. 


Generally there is also a technique for optimistic unprotected execution of 
critical sections - basically using an atomic counter - but you need to provide 
code for unrolling half-done executions. Search Google.
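For completeness, a minimal sketch of the conventional (non-optimistic) way to make the two-step update atomic - an ordinary lock around both operations:

```python
import threading

lock = threading.Lock()
items = []
state = {"flag": False}

def produce(x):
    # Holding the lock makes the append and the flag update one
    # indivisible step as seen by every other thread that takes the lock.
    with lock:
        items.append(x)
        state["flag"] = True

threads = [threading.Thread(target=produce, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(items), state["flag"])  # 4 True
```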


Robert



> It would be almost like getting and holding the GIL,
> to prevent a thread swap out between the two operations.
> - sort of the inverted function than for which the GIL
> seems to be used, which looks like "let go", get control
> back via return from blocking I/O, and then "re - acquire"
> 
> Is this "reversed" usage possible?
> Is there some way to prevent thread swapping?
> 
> The question arises in the context of a multi threaded
> environment where the list is used as a single producer,
> single consumer queue - I can solve my problem in various
> ways, of which this is one, and I am curious as to if it is 
> possible to prevent a thread swap from inside the thread.
> 
> - Hendrik
> 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to protect a piece of critical code?

2007-01-10 Thread Paul Rubin
"Hendrik van Rooyen" <[EMAIL PROTECTED]> writes:
> I would like to do the following as one atomic operation:
> 
> 1) Append an item to a list
> 2) Set a Boolean indicator

You could do it with locks as others have suggested, but maybe you
really want the Queue module.
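A minimal sketch of that approach (the module was named Queue in the Python 2 of this thread, queue today): put() is the thread-safe append and the "item available" signal in one step, so no separate Boolean is needed.

```python
import threading

try:
    import queue            # Python 3 spelling
except ImportError:
    import Queue as queue   # Python 2 spelling, current when this was written

q = queue.Queue()

def producer():
    for i in range(3):
        q.put(i)  # thread-safe; also wakes any consumer blocked in get()

t = threading.Thread(target=producer)
t.start()
received = [q.get() for _ in range(3)]  # get() blocks until data arrives
t.join()
print(received)  # [0, 1, 2]
```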
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: maximum number of threads

2007-01-10 Thread Laurent Pointal
Paul Sijben a écrit :
> Gabriel Genellina wrote:
>> Simply you can't, as you can't have 1 open files at once. Computer
>> resources are not infinite.
> 
> sure but *how* fast they run out is the issue here
> 
>> Do you really need so many threads? 
> 
> I might be able to do with a few less but I still need many.
> 
> I have done a quick test.
> 
> on WinXP I can create 1030 threads
> on Fedora Core 6 I can only create 301 (both python2.4 and 2.5)
> 
> now the 301 is rather low I'd say.

This is a system configurable limit (up to a maximum).

See ulimit man pages.

test

ulimit -a

to see what are the current limits, and try with

ulimit -u 2000

to modify the maximum number of user processes (AFAIK each thread uses a
process entry on Linux)

> 
> Paul
> 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: maximum number of threads

2007-01-10 Thread Felipe Almeida Lessa

On 1/10/07, Laurent Pointal <[EMAIL PROTECTED]> wrote:

This is a system configurable limit (up to a maximum).

See ulimit man pages.

test

ulimit -a

to see what are the current limits, and try with

ulimit -u 2000

to modify the maximum number of user process (AFAIK each thread use a
process entry on Linux)


I don't think it's only this.

---
$ ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
max nice(-e) 20
file size   (blocks, -f) unlimited
pending signals (-i) unlimited
max locked memory   (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) unlimited
max rt priority (-r) unlimited
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) unlimited
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
---

Well, unlimited number user processes. But:

---
$ python test.py
50
100
150
200
250
300
350
Exception raised: can't start new thread

Biggest number of threads: 382
---

The test.py script is attached.

--
Felipe.
from thread import start_new_thread
from time import sleep

def sleeper():
    try:
        while 1:
            sleep(1)
    except:
        if running: raise

def test():
    global running
    n = 0
    running = True
    try:
        while 1:
            start_new_thread(sleeper, ())
            n += 1
            if not (n % 50):
                print n
    except Exception, e:
        running = False
        print 'Exception raised:', e
    print 'Biggest number of threads:', n

if __name__ == '__main__':
    test()
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: maximum number of threads

2007-01-10 Thread Jean-Paul Calderone
On Wed, 10 Jan 2007 12:11:59 -0200, Felipe Almeida Lessa <[EMAIL PROTECTED]> 
wrote:
>On 1/10/07, Laurent Pointal <[EMAIL PROTECTED]> wrote:
>>This is a system configurable limit (up to a maximum).
>>
>>See ulimit man pages.
>>
>>test
>>
>> ulimit -a
>>
>>to see what are the current limits, and try with
>>
>> ulimit -u 2000
>>
>>to modify the maximum number of user process (AFAIK each thread use a
>>process entry on Linux)
>
>I don't think it's only this.

Indeed you are correct.  The actual limit you are hitting is the size
of your address space.  Each thread is allocated 8MB of stack.  382
threads consumes about 3GB of address space.  Even though most of this
memory isn't actually allocated, the address space is still used up.  So,
when you try to create the 383rd thread, the kernel can't find anyplace
to put its stack.  So you can't create it.

Try reducing your stack size or reducing the number of threads you create.
There's really actually almost no good reason to have this many threads,
even though it's possible.
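The same reduction can also be requested from inside Python via threading.stack_size() (available since Python 2.5); a quick sketch:

```python
import threading

# Request a 256 KB stack instead of the platform default (often 8 MB on
# Linux), so far more threads fit into the address space.  Must be called
# before the threads are started; the allowed minimum is platform-dependent.
threading.stack_size(256 * 1024)

results = []

def worker(n):
    results.append(n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 2, 3, 4, 5, 6, 7]
```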

[EMAIL PROTECTED]:~$ python Desktop/test.py
50
100
150
200
250
300
350
Exception raised: can't start new thread

Biggest number of threads: 382
[EMAIL PROTECTED]:~$ ulimit -Ss 4096
[EMAIL PROTECTED]:~$ python Desktop/test.py
50
100
150
200
250
300
350
400
450
500
550
600
650
700
750
Exception raised: can't start new thread

Biggest number of threads: 764
[EMAIL PROTECTED]:~$

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


An iterator with look-ahead

2007-01-10 Thread Neil Cerutti
For use in a hand-coded parser I wrote the following simple
iterator with look-ahead. I haven't thought too deeply about what
peek ought to return when the iterator is exhausted. Suggestions
are respectfully requested. As it is, you can't be sure what a
peek() => None signifies until the next iteration unless you
don't expect None in your sequence.

Using itertools.tee is the alternative I thought about, but
caveats in the documentation dissuaded me.

class LookAheadIter(object):
    """An iterator with a peek() method, so you can see what's coming next.

    If there is no look-ahead, peek() returns None.

    >>> nums = [1, 2, 3, 4, 5]
    >>> look = LookAheadIter(nums)
    >>> for a in look:
    ...     print (a, look.peek())
    (1, 2)
    (2, 3)
    (3, 4)
    (4, 5)
    (5, None)
    """
    def __init__(self, data):
        self.iter = iter(data)
        self.look = self.iter.next()
        self.exhausted = False

    def __iter__(self):
        return self

    def peek(self):
        if self.exhausted:
            return None
        else:
            return self.look

    def next(self):
        item = self.look
        try:
            self.look = self.iter.next()
        except StopIteration:
            if self.exhausted:
                raise
            else:
                self.exhausted = True
        return item

-- 
Neil Cerutti
We've got to pause and ask ourselves: How much clean air do we really need?
--Lee Iacocca
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Internet Survey

2007-01-10 Thread jmfbahciv
In article <[EMAIL PROTECTED]>,
   "Elan Magavi" <[EMAIL PROTECTED]> wrote:
>
><[EMAIL PROTECTED]> wrote in message 
>news:[EMAIL PROTECTED]
>> Hello all,
>>
>> I represent Octabox, an Internet Start-up developing a wide-scale
>> platform for Internet services.  We are very interested to know your
>> thoughts on Internet productivity and how it might be improved, and to
>> that end we have set up a short survey at our website -
>> http://www.octabox.com/productivity_poll.php
>> We would very much appreciate if you would take a couple of minutes to
>> fill it up and influence our direction and empahsis.
>>
>> Thanks in advance,
>> Octabox Development Team
>
>Is that like.. OctaPussy?

I didn't read their stuff.  Are they really trying to put a 
round peg in a square hole?

/BAH
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: maximum number of threads

2007-01-10 Thread hg
Jean-Paul Calderone wrote:

> On Wed, 10 Jan 2007 12:11:59 -0200, Felipe Almeida Lessa
> <[EMAIL PROTECTED]> wrote:
>>On 1/10/07, Laurent Pointal <[EMAIL PROTECTED]> wrote:
>>>This is a system configurable limit (up to a maximum).
>>>
>>>See ulimit man pages.
>>>
>>>test
>>>
>>> ulimit -a
>>>
>>>to see what are the current limits, and try with
>>>
>>> ulimit -u 2000
>>>
>>>to modify the maximum number of user process (AFAIK each thread use a
>>>process entry on Linux)
>>
>>I don't think it's only this.
> 
> Indeed you are correct.  The actual limit you are hitting is the size
> of your address space.  Each thread is allocated 8MB of stack.  382
> threads consumes about 3GB of address space.  Even though most of this
> memory isn't actually allocated, the address space is still used up.  So,
> when you try to create the 383rd thread, the kernel can't find anyplace
> to put its stack.  So you can't create it.
> 
> Try reducing your stack size or reducing the number of threads you create.
> There's really actually almost no good reason to have this many threads,
> even though it's possible.
> 
> [EMAIL PROTECTED]:~$ python Desktop/test.py
> 50
> 100
> 150
> 200
> 250
> 300
> 350
> Exception raised: can't start new thread
> 
> Biggest number of threads: 382
> [EMAIL PROTECTED]:~$ ulimit -Ss 4096
> [EMAIL PROTECTED]:~$ python Desktop/test.py
> 50
> 100
> 150
> 200
> 250
> 300
> 350
> 400
> 450
> 500
> 550
> 600
> 650
> 700
> 750
> Exception raised: can't start new thread
> 
> Biggest number of threads: 764
> [EMAIL PROTECTED]:~$
> 
> Jean-Paul


Would increasing the swap size do it also then ?

hg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Internet Survey

2007-01-10 Thread jmfbahciv
In article <[EMAIL PROTECTED]>,
   zoara <[EMAIL PROTECTED]> wrote:
>On 9 Jan 2007 06:58:15 -0800, [EMAIL PROTECTED] wrote:
>
>> Hello all,
>> 
>> I represent Octabox, an Internet Start-up developing a wide-scale
>> platform for Internet services.  We are very interested to know your
>> thoughts on Internet productivity and how it might be improved, and to
>> that end we have set up a short survey at our website -
>> http://www.octabox.com/productivity_poll.php
>> We would very much appreciate if you would take a couple of minutes to
>> fill it up and influence our direction and empahsis.
>
>Well, that was too tempting to pass up. Amusing answers related to dirty
>bastard time-wasting spammers duly entered and submitted.

My hope is that one of these groups of kids will learn from
the nose-wiping service a.f.c. gives them and we get a query
back full of curiosity.  It is ironic that these ads ask
for expert help and then get rude when it's given.

/BAH
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic library loading, missing symbols

2007-01-10 Thread [EMAIL PROTECTED]
> Did you verify, using nm -D, that the symbol is indeed present in
> the shared object, not just in the source code?

Yes, the symbol is found in the shared object when using nm.

> What flags are given to that dlopen call?

dlopen(lib, RTLD_NOW | RTLD_GLOBAL);


> No. The dynamic linker doesn't search files to resolve symbols; instead,
> it searches the process' memory. It first loads the referenced shared
> libraries (processing the DT_NEEDED records in each one); that uses
> the LD_LIBRARY_PATH. Then, symbol resolution needs to find everything
> in the libraries that have already been loaded.

Ok, so if the linker is searching the process address space, then I
suppose what really comes into play here is how the Python interpreter
dynamically loaded my module, a module which is in turn calling code
that does the dlopen above. If the Python interpreter is not loading
my library as global, does that mean the linker will not find them
when subsequent libraries are loaded?
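If so, one possible workaround is to widen the dlopen flags before importing the extension. A sketch (module name hypothetical; in current Python the RTLD_* constants live in os, while at the time they came from the dl/DLFCN modules):

```python
import os
import sys

# CPython normally dlopen()s extension modules without RTLD_GLOBAL, so
# their symbols are invisible to libraries dlopen()ed afterwards.
# Widening the flags before the import makes the symbols global.
old_flags = sys.getdlopenflags()
sys.setdlopenflags(os.RTLD_NOW | os.RTLD_GLOBAL)
# import my_boost_module   # hypothetical Boost.Python extension
sys.setdlopenflags(old_flags)  # restore the default afterwards
print(sys.getdlopenflags() == old_flags)  # True
```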

> There are various ways to control which subset of in-memory symbols
> the dynamic linker considers: the RTLD_GLOBAL/RTLD_LOCAL flags play
> a role, the -Bsymbolic flag given to the static linker has an impact,
> and so does the symbol visibility (hidden/internal/protected).

Ok, any other suggestions for things to try? Since all of the dlopen
calls happen in 3rd party code, I won't really be able to modify them.
I will try looking into more compile time flags for the linker and see
what I can come up with. Keep in mind that this code works when it is
all C++. Only when I add the Boost.Python wrapper to my code and then
import in Python do I get this symbol error.

Thank you for your help,
~Doug

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: maximum number of threads

2007-01-10 Thread Paul Sijben
All thanks for all the input! This was very informative.

Looks like I indeed need stackless as my code benefits from being
concurrently designed.

Paul

Jean-Paul Calderone wrote:
> On Wed, 10 Jan 2007 12:11:59 -0200, Felipe Almeida Lessa
> <[EMAIL PROTECTED]> wrote:
>> On 1/10/07, Laurent Pointal <[EMAIL PROTECTED]> wrote:
>>> This is a system configurable limit (up to a maximum).
>>>
>>> See ulimit man pages.
>>>
>>> test
>>>
>>> ulimit -a
>>>
>>> to see what are the current limits, and try with
>>>
>>> ulimit -u 2000
>>>
>>> to modify the maximum number of user process (AFAIK each thread use a
>>> process entry on Linux)
>>
>> I don't think it's only this.
> 
> Indeed you are correct.  The actual limit you are hitting is the size
> of your address space.  Each thread is allocated 8MB of stack.  382
> threads consumes about 3GB of address space.  Even though most of this
> memory isn't actually allocated, the address space is still used up.  So,
> when you try to create the 383rd thread, the kernel can't find anyplace
> to put its stack.  So you can't create it.
> 
> Try reducing your stack size or reducing the number of threads you create.
> There's really actually almost no good reason to have this many threads,
> even though it's possible.
> 
>[EMAIL PROTECTED]:~$ python Desktop/test.py
>50
>100
>150
>200
>250
>300
>350
>Exception raised: can't start new thread
>   Biggest number of threads: 382
>[EMAIL PROTECTED]:~$ ulimit -Ss 4096
>[EMAIL PROTECTED]:~$ python Desktop/test.py
>50
>100
>150
>200
>250
>300
>350
>400
>450
>500
>550
>600
>650
>700
>750
>Exception raised: can't start new thread
>   Biggest number of threads: 764
>[EMAIL PROTECTED]:~$
>Jean-Paul

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: An iterator with look-ahead

2007-01-10 Thread Fredrik Lundh
Neil Cerutti wrote:

> For use in a hand-coded parser I wrote the following simple
> iterator with look-ahead. I haven't thought too deeply about what
> peek ought to return when the iterator is exhausted. Suggestions
> are respectfully requested. As it is, you can't be sure what a
> peek() => None signifies until the next iteration unless you
> don't expect None in your sequence.

if you're doing simple parsing on an iterable, it's easier and more efficient
to pass around the current token and the iterator's next method:

http://online.effbot.org/2005_11_01_archive.htm#simple-parser-1

 



-- 
http://mail.python.org/mailman/listinfo/python-list


Working with named groups in re module

2007-01-10 Thread Neil Cerutti
I found some clues on lexing using the re module in Python in an
article by Martin Löwis.

http://www.python.org/community/sigs/retired/parser-sig/towards-standard/

He writes:
  [...]
  A scanner based on regular expressions is usually implemented
  as an alternative of all token definitions. For XPath, a
  fragment of this expressions looks like this:


  (?P<Number>\\d+(\\.\\d*)?|\\.\\d+)|
  (?P<VariableReference>\\$""" + QName + """)|
  (?P<NCName>"""+NCName+""")|
  (?P<QName>"""+QName+""")|
  (?P<LPAREN>\\()|

  Here, each alternative in the regular expression defines a
  named group. Scanning proceeds in the following steps:

 1. Given the complete input, match the regular expression
 with the beginning of the input.
 2. Find out which alternative matched.
 [...]

Item 2 is where I get stuck. There doesn't seem to be an obvious
way to do it, which I understand is a bad thing in Python.
Whatever source code went with the article originally is not
linked from the above page, so I don't know what Martin did.

Here's what I came up with (with a trivial example regex):

  import re
  r = re.compile('(?P<X>x+)|(?P<A>a+)')
  m = r.match('aaxaxx')
  if m:
      for k in r.groupindex:
          if m.group(k):
              # Find the token type.
              token = (k, m.group())

I wish I could do something obvious instead, like m.name().

-- 
Neil Cerutti
After finding no qualified candidates for the position of principal, the
school board is pleased to announce the appointment of David Steele to the
post. --Philip Streifer
-- 
http://mail.python.org/mailman/listinfo/python-list


Need startup suggestions for writing a MSA viewer GUI in python

2007-01-10 Thread Joel Hedlund
Hi!

I've been thinking about writing a good multiple sequence alignment 
(MSA) viewer in python. Sort of like ClustalX, only with better zoom and 
pan tools. I've been using python in my work for a couple of years, but 
this is my first shot at making a GUI so I'd very much appreciate some 
ideas from you people to get me going in the right direction. Despite my 
GUI n00b-ness I need to get it good and usable with an intuitive look 
and feel.

What do you think I should do? What packages should I use?

For you non-bioinformatic guys out there, an MSA is basically a big 
matrix (~1000 cols x ~100 rows) of letters, where each row represents a 
biological sequence (gene, protein, etc...). Each sequence has an ID 
that is usually shorter than 40 characters (typically, 8-12). Usually, 
MSA visualizers color the letters and their backgrounds according to 
chemical properties.

I want the look and feel to be pretty much like a modern midi sequencer 
(like cubase, nuendo, reason etc...). This means the GUI should have 
least three panes; one to the left to hold the IDs, one in the bottom to 
hold graphs and plots (e.g: user configurable tracks), and the main one 
that holds the actual MSA and occupies most of the space. The left and 
bottom panes should be resizable and foldable.

I would like to be able to zoom and pan x and y axes independently to 
view different portions of the MSA, and the left and bottom panes should 
follow the main pane. I would also like to be able to use drag'n'drop on 
IDs for reordering sequences, and possibly also on the MSA itself to 
shift sequences left and right. Furthermore, I would like to be able to 
select sequences and positions (individually, in ranges or sparsely). I 
would like to have a context sensitive menu on the right mouse button, 
possibly with submenus. Finally, I'd like to be able to export printable 
figures (eps?) of regions and whole MSAs.

I'm thinking all three panes may have to be rendered using some sort of 
scalable graphics because of the coloring and since I'd like to be able 
to zoom freely. I'll also need to draw graphs and plots for the tracks. 
Is pygame good for this, or is there a better way of doing it?

I want my viewer to behave and look like any other program, so I'm 
thinking maybe I should use some standard GUI toolkit instead, say PyQT 
or PyGTK? Would they still allow me to render the MSA nicely?

Does this seem like a humongous project?

Thanks for taking the time!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need startup suggestions for writing a MSA viewer GUI in python

2007-01-10 Thread Chris Mellon
On 1/10/07, Joel Hedlund <[EMAIL PROTECTED]> wrote:
> Hi!
>
> I've been thinking about writing a good multiple sequence alignment
> (MSA) viewer in python. Sort of like ClustalX, only with better zoom and
> pan tools. I've been using python in my work for a couple of years, but
> this is my first shot at making a GUI so I'd very much appreciate some
> ideas from you people to get me going in the right direction. Despite my
> GUI n00b-ness I need to get it good and usable with an intuitive look
> and feel.
>
> What do you think I should do? What packages should I use?
>
> For you non-bioinformatic guys out there, an MSA is basically a big
> matrix (~1000 cols x ~100 rows) of letters, where each row represents a
> biological sequence (gene, protein, etc...). Each sequence has an ID
> that is usually shorter than 40 characters (typically, 8-12). Usually,
> msa visualizers color the letters and their backgrouds according to
> chemical properties.
>
> I want the look and feel to be pretty much like a modern midi sequencer
> (like cubase, nuendo, reason etc...). This means the GUI should have
> least three panes; one to the left to hold the IDs, one in the bottom to
> hold graphs and plots (e.g: user configurable tracks), and the main one
> that holds the actual MSA and occupies most of the space. The left and
> bottom panes should be resizable and foldable.
>
> I would like to be able to zoom and pan x and y axes independently to
> view different portions of the MSA, and the left and bottom panes should
> follow the main pane. I would also like to be able to use drag'n'drop on
> IDs for reordering sequences, and possibly also on the MSA itself to
> shift sequences left and right. Furthermore, I would like to be able to
> select sequences and positions (individually, in ranges or sparsely). I
> would like to have a context sensitive menu on the right mouse button,
> possibly with submenus. Finally, I'd like to be able to export printable
> figures (eps?) of regions and whole MSAs.
>
> I'm thinking all three panes may have to be rendered using some sort of
> scalable graphics because of the coloring and since I'd like to be able
> to zoom freely. I'll also need to draw graphs and plots for the tracks.
> Is pygame good for this, or is there a better way of doing it?
>
> I want my viewer to behave and look like any other program, so I'm
> thinking maybe I should use some standard GUI toolkit instead, say PyQT
> or PyGTK? Would they still allow me to render the MSA nicely?
>
> Does this seem like a humongous project?
>
> Thanks for taking the time!
> /Joel Hedlund

This will probably be a major, but not humongous, project. wxPython,
pyGTK, and pyQt all have the architecture and basics you'll need; it
will probably be about the same amount of work to create it in any of
them. Pick the one that best suits your licensing and platform needs.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: An iterator with look-ahead

2007-01-10 Thread Neil Cerutti
On 2007-01-10, Fredrik Lundh <[EMAIL PROTECTED]> wrote:
> if you're doing simple parsing on an iterable, it's easier and
> more efficient to pass around the current token and the
> iterator's next method:
>
> http://online.effbot.org/2005_11_01_archive.htm#simple-parser-1

Thank you. Much better.

-- 
Neil Cerutti
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Working with named groups in re module

2007-01-10 Thread Fredrik Lundh
Neil Cerutti wrote:

> I found some clues on lexing using the re module in Python in an
> article by Martin Löwis.

>   Here, each alternative in the regular expression defines a
>   named group. Scanning proceeds in the following steps:
>
>  1. Given the complete input, match the regular expression
>  with the beginning of the input.
>  2. Find out which alternative matched.

you can use lastgroup, or lastindex:

http://effbot.org/zone/xml-scanner.htm
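A minimal sketch (group names invented for illustration):

```python
import re

r = re.compile(r'(?P<X>x+)|(?P<A>a+)')
m = r.match('aaxaxx')
# m.lastgroup is the name of the alternative that matched,
# m.lastindex its 1-based group number.
print(m.lastgroup, m.lastindex, m.group())  # A 2 aa
```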

there's also a "hidden" ready-made scanner class inside the SRE module
that works pretty well for simple cases; see:

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/457664

 



-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Internet Survey

2007-01-10 Thread martin griffith
On Tue, 9 Jan 2007 20:16:01 +, in comp.arch.embedded Pete Fenelon
<[EMAIL PROTECTED]> wrote:

>In comp.arch.embedded [EMAIL PROTECTED] wrote:
>> Hello all,
>> 
>> I represent Octabox, an Internet Start-up developing a wide-scale
>
>Hello. F*ck off, spammer.
>
>
>pete
a bit more subtle
http://www.tagzin.com/main.aspx?cat=v&q=history


martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Using Excel With Python

2007-01-10 Thread liam_jones
I'm very new to Python, well IronPython to be precise, and have been
having problems when using Excel.

The problem I'm having is closing my Excel object. I'm able to
successfully quit the Excel Application that I create, but once I open
a Workbook in the Application I can't successfully quit Excel (by this
I mean I can quit it, but the Excel process isn't killed and I
have to end it manually through Task Manager).

I've given a sample of code below to hopefully make things clearer.
I've then given all of the information I think might be useful (sorry
if I've gone over the top!).


import System
import clr
from System.Data import *
from System.Web import *
from System.Web.UI import *
from clr import *

clr.AddReference("Microsoft.Office.Interop.Word")
clr.AddReference("Microsoft.Office.Interop.Excel")
clr.AddReference("Microsoft.Office.Interop.PowerPoint")
clr.AddReference("Office")

from Microsoft.Office.Interop.Word import ApplicationClass as WordApplication
from Microsoft.Office.Interop.Excel import ApplicationClass as ExcelApplication
from Microsoft.Office.Interop.PowerPoint import ApplicationClass as PowerPointApplication
from Microsoft.Office.Interop.Word import WdReplace
from Microsoft.Office.Interop.Excel import XlCellType
from Microsoft.Office.Interop.Excel import XlSearchDirection
from System.Type import Missing
from System import GC

missing = Missing
FileLocation = "C:\\test.xls"

ExcelApp = ExcelApplication()
workbook= None

workbook = ExcelApp.Workbooks.Open(FileLocation, missing, missing,
missing, missing, missing, missing, missing, missing, missing, missing,
missing, missing, missing, missing)

workbook.Save()
workbook.Close(SaveChanges=0)

ExcelApp.Quit()

workbook = None
ExcelApp = None

GC.Collect()
GC.WaitForPendingFinalizers()


I've simplified the code by taking all of my Workbook processing out of
it, and the problem still occurs. As I said above, if I don't
create the workbook, then ExcelApp closes as expected (there are no
stray processes).

I've read many articles and postings over the last few days regarding
this, but have had no luck with anything I've seen, examples of this
are now given.

I've tried adding the below code, but with no luck.

ExcelApp.ActiveWorkbook.Save()
ExcelApp.ActiveWorkbook.Close(SaveChanges=0)
ExcelApp.Workbooks.Close()

I've also tried the below (again with no luck).

System.Runtime.InteropServices.Marshal.ReleaseComObject(workbook)
System.Runtime.InteropServices.Marshal.ReleaseComObject(excelApp)

And, I've also tried the following (again with no luck!).

del(workbook)
del(excelApp)

It may seem strange using the Garbage Collector (well I wouldn't have
thought about using it here), but it was something that I read about
using for code written in C#. I've tried the code in C# (what I
normally write in) and all works fine, the only real difference is that
I'm setting the objects to NULL in C# and None here in IronPython -
Does this make a difference, is there something else I should be
setting it to? The Workbook object HAS to be set to NULL in C#, it's
then picked up by the Garbage Collector and the task disappears from
the Task Manager.

I did think of killing the actual EXCEL.EXE process at the end of my
code, but there might be several versions of the containing application
running on the same box, so I can't kill all of the Excel processes.

As I said above, I'm sorry if I've gone over the top in my description.
Any ideas or pointers would be greatly appreciated as I'm now going
round in circles.

Thanks in advance.


Rgds
Liam

-- 
http://mail.python.org/mailman/listinfo/python-list


Using Excel With Python

2007-01-10 Thread liam_jones
I'm very new to Python, well IronPython to precise, and have been
having problems when using Excel.

The problem I'm having is the closing of my Excel object. I'm able to
successfully quit the Excel Application that I create, but when I open
a Workbook in the Application I can't successfully Quit Excel (by this
I mean I can quit it, but the Excel process isn't getting killed and I
have to manually go this through Task Manager).

I've given a sample of code below to hopefully make things clearer.
I've then given all of the information I think might be useful (sorry
if I've gone over the top!).


import System
import clr
from System.Data import *
from System.Web import *
from System.Web.UI import *
from clr import *

clr.AddReference("Microsoft.Office.Interop.Word")
clr.AddReference("Microsoft.Office.Interop.Excel")
clr.AddReference("Microsoft.Office.Interop.PowerPoint")
clr.AddReference("Office")



Using Excel With Python

2007-01-10 Thread liam_jones
I'm very new to Python, well IronPython to be precise, and have been
having problems when using Excel.

The problem I'm having is the closing of my Excel object. I'm able to
successfully quit the Excel Application that I create, but when I open
a Workbook in the Application I can't successfully Quit Excel (by this
I mean I can quit it, but the Excel process isn't getting killed and I
have to end it manually through Task Manager).

I've given a sample of code below to hopefully make things clearer.
I've then given all of the information I think might be useful (sorry
if I've gone over the top!).


import System
import clr
from System.Data import *
from System.Web import *
from System.Web.UI import *
from clr import *

clr.AddReference("Microsoft.Office.Interop.Word")
clr.AddReference("Microsoft.Office.Interop.Excel")
clr.AddReference("Microsoft.Office.Interop.PowerPoint")
clr.AddReference("Office")

from Microsoft.Office.Interop.Word import ApplicationClass as
WordApplication
from Microsoft.Office.Interop.Excel import ApplicationClass as
ExcelApplication
from Microsoft.Office.Interop.PowerPoint import ApplicationClass as
PowerPointApplication
from Microsoft.Office.Interop.Word import WdReplace
from Microsoft.Office.Interop.Excel import XlCellType
from Microsoft.Office.Interop.Excel import XlSearchDirection
from System.Type import Missing
from System import GC

missing = Missing
FileLocation = "C:\\test.xls"

ExcelApp = ExcelApplication()
workbook= None

workbook = ExcelApp.Workbooks.Open(FileLocation, missing, missing,
missing, missing, missing, missing, missing, missing, missing, missing,
missing, missing, missing, missing)

workbook.Save()
workbook.Close(SaveChanges=0)

ExcelApp.Quit()

workbook = None
ExcelApp = None

GC.Collect()
GC.WaitForPendingFinalizers()


I've simplified the code by taking all of my Workbook processing from
it and the problem is still occurring. As I said above if I don't
create the workbook, then ExcelApp closes as expected (there are no
stray processes).


I've read many articles and postings over the last few days regarding
this, but have had no luck with anything I've seen, examples of this
are now given.

I've tried adding the below code, but with no luck.

ExcelApp.ActiveWorkbook.Save()
ExcelApp.ActiveWorkbook.Close(SaveChanges=0)
ExcelApp.Workbooks.Close()

I've also tried the below (again with no luck).

System.Runtime.InteropServices.Marshal.ReleaseComObject(workbook)
System.Runtime.InteropServices.Marshal.ReleaseComObject(excelApp)

And, I've also tried the following (again with no luck!).

del(workbook)
del(excelApp)

It may seem strange using the Garbage Collector (well I wouldn't have
thought about using it here), but it was something that I read about
using for code written in C#. I've tried the code in C# (what I
normally write in) and all works fine, the only real difference is that
I'm setting the objects to NULL in C# and None here in IronPython -
Does this make a difference, is there something else I should be
setting it to? The Workbook object HAS to be set to NULL in C#, it's
then picked up by the Garbage Collector and the task disappears from
the Task Manager.

I did think of killing the actual EXCEL.EXE process at the end of my
code, but there might be several versions of the containing application
running on the same box, so I can't kill all of the Excel processes.

As I said above, I'm sorry if I've gone over the top in my description.
Any ideas or pointers would be greatly appreciated as I'm now going
round in circles. 

Thanks in advance. 


Rgds 
Liam

-- 
http://mail.python.org/mailman/listinfo/python-list


Building Python 2.5.0 on AIX 5.3 - Undefined symbol: .__floor

2007-01-10 Thread Justin Johnson
Hello,

I'm trying to build Python 2.5.0 on AIX 5.3 using IBM's compiler
(VisualAge C++ Professional / C for AIX Compiler, Version 6).  I run
configure and make, but makes fails with undefined symbols.  See the
output from configure and make below.

svnadm /svn/build/python-2.5.0>env CC=cc CXX=xlC ./configure
--prefix=$base_dir \
> --disable-ipv6 \
> --enable-shared=yes \
> --enable-static=no
checking MACHDEP... aix5
checking EXTRAPLATDIR...
checking for --without-gcc...
checking for gcc... cc_r
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... no
checking whether cc_r accepts -g... yes
checking for cc_r option to accept ANSI C... none needed
checking for --with-cxx-main=... no
checking how to run the C preprocessor... cc_r -E
checking for egrep... grep -E
checking for AIX... yes
checking for --with-suffix...
checking for case-insensitive build directory... no
checking LIBRARY... libpython$(VERSION).a
checking LINKCC... $(srcdir)/Modules/makexp_aix Modules/python.exp .
$(LIBRARY); $(PURIFY) $(MAINCC)
checking for --enable-shared... yes
checking for --enable-profiling...
checking LDLIBRARY... libpython$(VERSION).a
checking for ranlib... ranlib
checking for ar... ar
checking for svnversion... found
checking for a BSD-compatible install... ./install-sh -c
checking for --with-pydebug... no
checking whether cc_r accepts -OPT:Olimit=0... no
checking whether cc_r accepts -Olimit 1500... no
checking whether pthreads are available without options... yes
checking whether xlC also accepts flags for thread support... no
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking asm/types.h usability... no
checking asm/types.h presence... no
checking for asm/types.h... no
checking conio.h usability... no
checking conio.h presence... no
checking for conio.h... no
checking curses.h usability... yes
checking curses.h presence... yes
checking for curses.h... yes
checking direct.h usability... no
checking direct.h presence... no
checking for direct.h... no
checking dlfcn.h usability... yes
checking dlfcn.h presence... yes
checking for dlfcn.h... yes
checking errno.h usability... yes
checking errno.h presence... yes
checking for errno.h... yes
checking fcntl.h usability... yes
checking fcntl.h presence... yes
checking for fcntl.h... yes
checking grp.h usability... yes
checking grp.h presence... yes
checking for grp.h... yes
checking shadow.h usability... no
checking shadow.h presence... no
checking for shadow.h... no
checking io.h usability... no
checking io.h presence... no
checking for io.h... no
checking langinfo.h usability... yes
checking langinfo.h presence... yes
checking for langinfo.h... yes
checking libintl.h usability... no
checking libintl.h presence... no
checking for libintl.h... no
checking ncurses.h usability... no
checking ncurses.h presence... no
checking for ncurses.h... no
checking poll.h usability... yes
checking poll.h presence... yes
checking for poll.h... yes
checking process.h usability... no
checking process.h presence... no
checking for process.h... no
checking pthread.h usability... yes
checking pthread.h presence... yes
checking for pthread.h... yes
checking signal.h usability... yes
checking signal.h presence... yes
checking for signal.h... yes
checking stropts.h usability... yes
checking stropts.h presence... yes
checking for stropts.h... yes
checking termios.h usability... yes
checking termios.h presence... yes
checking for termios.h... yes
checking thread.h usability... yes
checking thread.h presence... yes
checking for thread.h... yes
checking for unistd.h... (cached) yes
checking utime.h usability... yes
checking utime.h presence... yes
checking for utime.h... yes
checking sys/audioio.h usability... no
checking sys/audioio.h presence... no
checking for sys/audioio.h... no
checking sys/bsdtty.h usability... no
checking sys/bsdtty.h presence... no
checking for sys/bsdtty.h... no
checking sys/file.h usability... yes
checking sys/file.h presence... yes
checking for sys/file.h... yes
checking sys/loadavg.h usability... no
checking sys/loadavg.h presence... no
checking for sys/loadavg.h... no
checking sys/lock.h usability... yes
checking sys/lock.h presence... yes
checking for sys/lock.h... yes
checking sys/mkdev.h usability... no
checking sys/mkdev.h presence... no
checking for sys/mkdev.h... no
checking sys/modem.h usability... no
checking sys/modem.h presence... no
checking for sys/modem.h... no
checking sys/param.h usability... yes
checking sys/param.h presence... yes
checking for sys/param.h... yes
c

Re: Internet Survey

2007-01-10 Thread Lefty Bigfoot
On Wed, 10 Jan 2007 08:28:33 -0600, [EMAIL PROTECTED] wrote
(in article <[EMAIL PROTECTED]>):

> In article <[EMAIL PROTECTED]>,
>"Elan Magavi" <[EMAIL PROTECTED]> wrote:
>> Is that like.. OctaPussy?
> 
> I didn't read their stuff.  Are they really trying to put a 
> round peg in a square hole?

Sounds more like an octagonal pole in a round hole.


-- 
Lefty
All of God's creatures have a place..
.right next to the potatoes and gravy.
See also: http://www.gizmodo.com/gadgets/images/iProduct.gif

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: (newbie) Is there a way to prevent "name redundancy" in OOP ?

2007-01-10 Thread Martin Miller
Carl Banks wrote:
> Martin Miller wrote:
> > Carl Banks wrote:
> >
> > > Martin Miller wrote:
> > > > ### non-redundant example ###
> > > > import sys
> > > >
> > > > class Pin:
> > > > def __init__(self, name, namespace=None):
> > > > self.name = name
> > > > if namespace == None:
> > > > # default to caller's globals
> > > > namespace = sys._getframe(1).f_globals
> > > > namespace[name] = self
> > > >
> > > > Pin('aap')  # create a Pin object named 'aap'
> > > > Pin('aap2') # create a Pin object named 'aap2'
> > > > print aap.name
> > > > print aap2.name
> > >
> > > The problem with this is that it only works for global namespaces,
> > > while failing silently and subtly if used in a local namespace:
> >
> > Oh, contrair. It would work fine with local namespaces simply by
> > overriding the default value of the optional 'namepace' parameter (see
> > below).
>
> Did you try it?

Yes, but I misinterpreted the results, which seemed to support my
claim. Therefore I must retract what I wrote and now have to agree
with what you said about it not working in a local namespace --
specifically in the sense that it is unable to bind the instance to
the name in the caller's local namespace.

I'm not sure this is a critical flaw, in the sense that it may
not matter for some usages. For example, I've seen it used to define
(yet another) Enum class which facilitated the creation of names
bound to a range or sequence of values. The fact that these couldn't
be defined local to a code block apparently wasn't a big issue.


> > > def fun():
> > > Pin('aap')
> > > aap1 = aap
> > > fun2()
> > > aap2 = aap
> > > print aap1 is aap2
> > >
> > > def fun2():
> > > Pin('aap')
> > >
> > > If it's your deliberate intention to do it with the global namespace,
> > > you might as well just use globals() and do it explicitly, rather than
> > > mucking around with Python frame internals.  (And it doesn't make the
> > > class unusable for more straightforward uses.)
> >
> > You could be more explicit by just passing 'globals()' as a second
> > parameter to the __init__ constructor (which is unnecessary, since
> > that's effectively the default).
> >
> > It's not clear to me how the example provided shows the technique
> > "failing silently and subtly if used in a local namespace" because what
> > happens is exactly what I would expect if the constructor is called
> > twice with the same string and defaulted namespace -- namely create
> > another object and make the existing name refer to it. If one didn't
> > want the call to Pin in fun2 to do this, just change fun2 to this:
>
> Because the usage deceptively suggests that it defines a name in the
> local namespace.  Failing may be too strong a word, but I've come to
> expect a consistent behavior w.r.t. namespaces, which this violates, so
> I think it qualifies as a failure.

I don't see how the usage deceptively suggests this at all. In this
case -- your sample code for fun() and fun2() -- all calls were simply
Pin('aap'). Since no additional namespace argument was supplied, the
same name was bound in the defaulted global namespace each time, but
to different objects. In other words, the 'print aap1 is aap2'
statement produced 'false' because the call to fun2() changed the
(global) object to which 'aap' was previously bound.

>
> > def fun2():
> > Pin('aap', locals())
>
> Did you actually try this?  ...

As I said, yes, I did, and the addition of the 'locals()' parameter
does make the 'print aap1 is aap2' statement in fun() output 'true'.
This led me to take for granted that it had bound the name in the
local namespace. However, this assumption was incorrect, but that
wasn't obvious since there were no further statements in fun2().

The problem is that there fundamentally doesn't seem to be a way to
create local variables except directly, by using an assignment
statement within the block of a function or method. Modifying the
mapping returned from locals() does not accomplish this -- normally,
anyway. Interestingly, it currently will if there's an exec statement
anywhere in the function, even a dummy one, but this is not a
documented feature and, from what I've read, is just a side-effect of
the way code optimization is done to support the exec statement.
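For what it's worth, both halves of this can be demonstrated directly (a sketch of mine, not the original Pin class; the function names are invented for illustration, and the behavior is as observed in CPython):

```python
import sys

def make_global(name, value, namespace=None):
    # Bind value to name in the caller's globals by default,
    # the same trick Pin.__init__ uses via sys._getframe(1).
    if namespace is None:
        namespace = sys._getframe(1).f_globals
    namespace[name] = value

def try_local_binding():
    # Mutating the mapping returned by locals() does NOT create
    # a real local variable.
    locals()['aap'] = 42
    try:
        return aap  # no local assignment, so this is a global lookup
    except NameError:
        return None  # the name was never really bound
```

Calling make_global('x', 1) at module level really does create a global x, while try_local_binding() returns None (assuming no global 'aap' exists) because 'aap' never becomes a local.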


> ...  Better yet, try this function.  Make sure
> aap isn't defined as a global:
>
> def fun3():
> Pin('aap',locals())
> print aap
>
> (I get NameError.)

I do, too, if I run it by itself or first unbind the global left over
from fun() with a 'del aap' statement.


> > This way the "print aap1 is aap2" statement in fun() would output
> > "true" since the nested call to Pin would now (explicitly) cause the
> > name of the new object to be put into fun2's local namespace leaving
> > the global one created by the call in fun() alone.
>
> Did you try it?

Yes -- explained above.


> > Obviously, this was not an objection I anticipated. An important on

Re: Need startup suggestions for writing a MSA viewer GUI in python

2007-01-10 Thread hg
Joel Hedlund wrote:

> Hi!
> 
> I've been thinking about writing a good multiple sequence alignment
> (MSA) viewer in python. Sort of like ClustalX, only with better zoom and
> pan tools. I've been using python in my work for a couple of years, but
> this is my first shot at making a GUI so I'd very much appreciate some
> ideas from you people to get me going in the right direction. Despite my
> GUI n00b-ness I need to get it good and usable with an intuitive look
> and feel.
> 
> What do you think I should do? What packages should I use?
> 
> For you non-bioinformatic guys out there, an MSA is basically a big
> matrix (~1000 cols x ~100 rows) of letters, where each row represents a
> biological sequence (gene, protein, etc...). Each sequence has an ID
> that is usually shorter than 40 characters (typically, 8-12). Usually,
> msa visualizers color the letters and their backgrouds according to
> chemical properties.
> 
> I want the look and feel to be pretty much like a modern midi sequencer
> (like cubase, nuendo, reason etc...). This means the GUI should have
> least three panes; one to the left to hold the IDs, one in the bottom to
> hold graphs and plots (e.g: user configurable tracks), and the main one
> that holds the actual MSA and occupies most of the space. The left and
> bottom panes should be resizable and foldable.
> 
> I would like to be able to zoom and pan x and y axes independently to
> view different portions of the MSA, and the left and bottom panes should
> follow the main pane. I would also like to be able to use drag'n'drop on
> IDs for reordering sequences, and possibly also on the MSA itself to
> shift sequences left and right. Furthermore, I would like to be able to
> select sequences and positions (individually, in ranges or sparsely). I
> would like to have a context sensitive menu on the right mouse button,
> possibly with submenus. Finally, I'd like to be able to export printable
> figures (eps?) of regions and whole MSAs.
> 
> I'm thinking all three panes may have to be rendered using some sort of
> scalable graphics because of the coloring and since I'd like to be able
> to zoom freely. I'll also need to draw graphs and plots for the tracks.
> Is pygame good for this, or is there a better way of doing it?
> 
> I want my viewer to behave and look like any other program, so I'm
> thinking maybe I should use some standard GUI toolkit instead, say PyQT
> or PyGTK? Would they still allow me to render the MSA nicely?
> 
> Does this seem like a humongous project?
> 
> Thanks for taking the time!
> /Joel Hedlund

I do not know if PyGtk and PyQT have demos, but wxPython does and includes
PyPlot: an easy way to look at the basic features.

hg


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Yield

2007-01-10 Thread Mikael Olofsson
Mathias Panzenboeck wrote:
> def primes():
>   yield 1
>   yield 2
>   [snip rest of code]
>   

Hmm... 1 is not a prime. See for instance

http://en.wikipedia.org/wiki/Prime_number

The definition given there is "In mathematics , a 
*prime number* (or a *prime*) is a natural number  
that has exactly two (distinct) natural number divisors 
." The important part of the statement is "exactly 
two...divisors", which rules out the number 1.
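For reference, a generator that respects this definition would start at 2 (my own sketch using simple trial division, not the snipped code from the quoted post):

```python
from itertools import islice

def primes():
    # 1 is excluded: a prime has exactly two distinct divisors.
    yield 2
    n = 3
    while True:
        # trial division by odd numbers up to sqrt(n)
        if all(n % p for p in range(3, int(n ** 0.5) + 1, 2)):
            yield n
        n += 2

print(list(islice(primes(), 6)))  # [2, 3, 5, 7, 11, 13]
```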

/MiO
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Yield

2007-01-10 Thread Mikael Olofsson


I wrote:
> The definition given there is "In mathematics , a 
> *prime number* (or a *prime*) is a natural number 
>  that has exactly two (distinct) natural number 
> divisors ." The important part of the statement is 
> "exactly two...divisors", which rules out the number 1.

Or should I say: Thunderbird made me write:... Those freakin 
 and  was not visible before I 
posted the thing. &%#&%#

/MiO
-- 
http://mail.python.org/mailman/listinfo/python-list


call graph using python and cscope

2007-01-10 Thread Roland Puntaier
""" 
Sometimes it is nice to have the data used by cscope accessible in a 
programmatic way.
The following python script extracts the "functions called" information 
from cscope (function: callGraph)
and produces an html file from it.

  from csCallGraph import *
  acg=callGraph(entryFun,workingDir)

entryFun is the function to start with (e.g. main)
workingDir is the directory where cscope.out is located

As a script it can be called like:
  csCallGraph main > myprogram.html
"""

import subprocess, os, sys

def functionsCalled(entryFun,workingDir):
  cmd = "cscope -d -l -L -2%s"%entryFun
  process = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True, 
cwd=workingDir) 
  csoutput= process.stdout.read() 
  del process
  cslines=[arr.strip().split(' ') for arr in csoutput.split('\n') if 
len(arr.split(' '))>1]
  funsCalled={}
  for fl in cslines:
if funsCalled.has_key(fl[0]):
  funsCalled[fl[0]]|=set([fl[1]])
else:
  funsCalled[fl[0]]=set([fl[1]])
  allFuns=set(map(lambda x:x[1],cslines))
  return (allFuns,funsCalled)

def callGraph(entryFun,workingDir,cg={}):
  if not cg.has_key(entryFun): 
allFuns,funsCalled=functionsCalled(entryFun,workingDir)
cg[entryFun]=funsCalled
for af in allFuns:
  cg=callGraph(af,workingDir,cg)
  return cg

def textCallGraph(acg):
  innerFuns=[(f,d,len(reduce(lambda x,y:x|y,d.values()))) for f,d in 
acg.items() if len(d)>0 ]
  leafFuns=[(f,d,0) for f,d in acg.items() if not len(d)>0 ]
  innerFuns.sort(lambda x,y: y[2]-x[2])
  innerLen=len(innerFuns)
  leafLen=len(leafFuns)
  title=lambda aFun: '\n' + aFun + '\n' + '-'*len(aFun)
  def ff(aFun,funsCalled):
fileFuns=zip(funsCalled.keys(),[''+',\n '.join(funsCalledInFile) 
for funsCalledInFile in funsCalled.values()])
funIn=lambda f: '\n%s in '%f
return title(aFun) + funIn(aFun) + funIn(aFun).join(map(lambda 
x:'%s:\n%s'%(x[0],x[1]),fileFuns))
  strInner='\n'.join([ff(f[0],f[1]) for f in innerFuns])
  strLeaves='\n'.join(map(lambda x:title(x[0]),leafFuns))
  return strInner+'\n'+strLeaves

def funWeights(acg):
  funWeights=dict([(f,reduce(lambda x,y:x|y,d.values())) for f,d in 
acg.items() if len(d)>0 ]+
  [(f,[]) for f,d in acg.items() if not len(d)>0 ])
  weights={}
  def calcWeights(af):
if not weights.has_key(af):
  subFuns=funWeights[af]
  weights[af]=1
  for f in subFuns:
calcWeights(f)
weights[af]+=weights[f]
  for af in funWeights.keys(): calcWeights(af)
  return weights

def htmlCallGraph(acg):
  funW=funWeights(acg)
  innerFuns=[(f,d,funW[f]) for f,d in acg.items() if len(d)>0 ]
  leafFuns=[(f,d,0) for f,d in acg.items() if not len(d)>0 ]
  #innerFuns.sort(lambda x,y: y[2]-x[2]))
  def cfun(a,b):
if b > a:
  return 1
elif b < a:
  return -1
return 0
  innerFuns.sort(lambda x,y: cfun(x[2],y[2]))
  innerLen=len(innerFuns)
  leafLen=len(leafFuns)
  funDict=dict(zip(map(lambda x:x[0],innerFuns)+map(lambda 
x:x[0],leafFuns),range(innerLen+leafLen)))
  title=lambda aFun: '' + aFun + ' 
(%i)'%funW[aFun] + '\n'
  def ff(aFun,funsCalled):
fun=lambda y:''+y+''
fileFuns=zip(funsCalled.keys(),[',\n'.join(map(fun,funsCalledInFile)) 
for funsCalledInFile in funsCalled.values()])
funIn=lambda f: '%s in '%f
return title(aFun) + funIn(aFun) + funIn(aFun).join(map(lambda 
x:'%s:\n%s'%(x[0],x[1]),fileFuns))
  strInner='\n'.join([ff(f[0],f[1]) for f in innerFuns])
  strLeaves='\n'.join(map(lambda x:title(x[0]),leafFuns))
  return '\n\n'+strInner+'\n'+strLeaves+"\n\n"

if __name__ == '__main__':
  if len(sys.argv) < 2:
  print 'Usage: csCallGraph.py entryFunction'
  sys.exit()
  entryFun=sys.argv[1]
  workingDir=os.getcwd()
  acg=callGraph(entryFun,workingDir)
  print htmlCallGraph(acg)
-- 
http://mail.python.org/mailman/listinfo/python-list


SubProcess _make_inheritable

2007-01-10 Thread Roland Puntaier
SubProcess.py needs to be patched - at least in 2.4 -  to work from 
windows GUIs:

def _make_inheritable(self, handle):
"""Return a duplicate of handle, which is inheritable"""
if handle==None: handle=-1
return DuplicateHandle(GetCurrentProcess(), handle,
   GetCurrentProcess(), 0, 1,
   DUPLICATE_SAME_ACCESS)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Execute binary code

2007-01-10 Thread sturlamolden

Chris Mellon wrote:

> This works fine if the binary data is "pure" asm, but the impresssion
> the OP gave is that it's a compiled binary, which you can't just "jump
> into" this way.

You may have to offset the function pointer so the entry point becomes
correct.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: maximum number of threads

2007-01-10 Thread Jeremy Sanders
Jean-Paul Calderone wrote:

> Indeed you are correct.  The actual limit you are hitting is the size
> of your address space.  Each thread is allocated 8MB of stack.  382
> threads consumes about 3GB of address space.  Even though most of this
> memory isn't actually allocated, the address space is still used up.  So,
> when you try to create the 383rd thread, the kernel can't find anyplace
> to put its stack.  So you can't create it.

Interesting. That's why I can get over 3000 on my x86-64 machine... Much
more address space.
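The arithmetic is easy to check (a back-of-the-envelope sketch; 8 MB is a common but configurable default, and the stdlib's threading.stack_size() can shrink it to pack more threads into a 32-bit address space):

```python
stack_mb = 8          # default per-thread stack size
threads = 382         # threads created before hitting the limit
address_space_gb = threads * stack_mb / 1024.0
print("%.2f GB" % address_space_gb)  # 2.98 GB
```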

-- 
Jeremy Sanders
http://www.jeremysanders.net/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Working with named groups in re module

2007-01-10 Thread Neil Cerutti
On 2007-01-10, Fredrik Lundh <[EMAIL PROTECTED]> wrote:
> Neil Cerutti wrote:
>> A found some clues on lexing using the re module in Python in
>> an article by Martin L÷wis.
>
>>   Here, each alternative in the regular expression defines a
>>   named group. Scanning proceeds in the following steps:
>>
>>  1. Given the complete input, match the regular expression
>>  with the beginning of the input.
>>  2. Find out which alternative matched.
>
> you can use lastgroup, or lastindex:
>
> http://effbot.org/zone/xml-scanner.htm
>
> there's also a "hidden" ready-made scanner class inside the SRE
> module that works pretty well for simple cases; see:
>
> http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/457664

Thanks for the excellent pointers.

I got tripped up:

>>> m = re.match('(a+(b*)a+)', 'aaa')
>>> dir(m)
['__copy__', '__deepcopy__', 'end', 'expand', 'group', 'groupdict', 'groups', 
'span', 'start']

There are some notable omissions there. That's not much of an
excuse for my not understanding the handy docs, but I guess it
can function as a warning against relying on the interactive
help.

I'd seen the lastgroup definition in the documentation, but didn't
realize it was exactly what I needed. I didn't think carefully
enough about what "last matched capturing group" actually meant,
given my regex. I don't think I saw "name" there either. ;-)

  lastgroup 
  
  The name of the last matched capturing group, or None if the
  group didn't have a name, or if no group was matched at all. 
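A minimal illustration of that attribute (my example, not from the article):

```python
import re

# Each alternative is a named group; lastgroup reports which one matched.
scanner = re.compile(r"(?P<number>\d+)|(?P<word>[a-z]+)")

print(scanner.match("123abc").lastgroup)  # number
print(scanner.match("abc123").lastgroup)  # word
```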

-- 
Neil Cerutti
We dispense with accuracy --sign at New York drug store
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need startup suggestions for writing a MSA viewer GUI in python

2007-01-10 Thread Neil Cerutti
On 2007-01-10, hg <[EMAIL PROTECTED]> wrote:
> Joel Hedlund wrote:
>> Thanks for taking the time!
>> /Joel Hedlund
>
> I do not know if PyGtk and PyQT have demos, but wxPython does
> and includes PyPlot: an easy way to look at the basic features.

PyQT does come with an impressive plethora of demos.

-- 
Neil Cerutti
The concert held in Fellowship Hall was a great success. Special thanks are
due to the minister's daughter, who labored the whole evening at the piano,
which as usual fell upon her. --Church Bulletin Blooper
-- 
http://mail.python.org/mailman/listinfo/python-list


Regex Question

2007-01-10 Thread Bill Mill
Hello all,

I've got a test script:

 start python code =

tests2 = ["item1: alpha; item2: beta. item3 - gamma--",
"item1: alpha; item3 - gamma--"]

def test_re(regex):
r = re.compile(regex, re.MULTILINE)
for test in tests2:
res = r.search(test)
if res:
print res.groups()
else:
print "Failed"

 end python code 

And a simple question:

Why does the first regex that follows successfully grab "beta", while
the second one doesn't?

In [131]: test_re(r"(?:item2: (.*?)\.)")
('beta',)
Failed

In [132]: test_re(r"(?:item2: (.*?)\.)?")
(None,)
(None,)

Shouldn't the '?' greedily grab the group match?
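A minimal reproduction shows what is happening (my sketch): a group qualified with '?' is allowed to match zero characters, and search() reports the first position at which the whole pattern can succeed, which here is position 0.

```python
import re

s = "item1: alpha; item2: beta. item3 - gamma--"

# The optional group matches zero characters at position 0,
# so search() never needs to advance to where item2 occurs.
m = re.search(r"(?:item2: (.*?)\.)?", s)
print(m.span(), m.groups())  # (0, 0) (None,)

# Without the '?', search() must advance until 'item2: ' matches.
m = re.search(r"(?:item2: (.*?)\.)", s)
print(m.groups())  # ('beta',)
```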

Thanks
Bill Mill
bill.mill at gmail.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Internet Survey

2007-01-10 Thread jmfbahciv
In article <[EMAIL PROTECTED]>,
   Lefty Bigfoot <[EMAIL PROTECTED]> wrote:
>On Wed, 10 Jan 2007 08:28:33 -0600, [EMAIL PROTECTED] wrote
>(in article <[EMAIL PROTECTED]>):
>
>> In article <[EMAIL PROTECTED]>,
>>"Elan Magavi" <[EMAIL PROTECTED]> wrote:
>>> Is that like.. OctaPussy?
>> 
>> I didn't read their stuff.  Are they really trying to put a 
>> round peg in a square hole?
>
>Sounds more like an octagonal pole in a round hole.

Nah, I figured they picked the word octal because it's never
been used before.

/BAH
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Execute binary code

2007-01-10 Thread Chris Mellon
On 10 Jan 2007 08:12:41 -0800, sturlamolden <[EMAIL PROTECTED]> wrote:
>
> Chris Mellon wrote:
>
> > This works fine if the binary data is "pure" asm, but the impresssion
> > the OP gave is that it's a compiled binary, which you can't just "jump
> > into" this way.
>
> You may have to offset the function pointer so the entry point becomes
> correct.
>

That won't be enough. You basically would have to re-implement the OS
loading process, handling relocations and loading any linked
libraries. Possible, in theory, but very non-trivial.

> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: An iterator with look-ahead

2007-01-10 Thread George Sakkis
Neil Cerutti wrote:

> For use in a hand-coded parser I wrote the following simple
> iterator with look-ahead. I haven't thought too deeply about what
> peek ought to return when the iterator is exhausted. Suggestions
> are respectfully requested. As it is, you can't be sure what a
> peek() => None signifies until the next iteration unless you
> don't expect None in your sequence.

There is a different implementation in the Cookbook already:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/304373

George

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: An iterator with look-ahead

2007-01-10 Thread Steven Bethard
Neil Cerutti wrote:
> For use in a hand-coded parser I wrote the following simple
> iterator with look-ahead.

There's a recipe for this:

   http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/304373

Note that the recipe efficiently supports an arbitrary look-ahead, not 
just a single item.

> I haven't thought too deeply about what peek ought to return
> when the iterator is exhausted. Suggestions are respectfully
> requested.

In the recipe, StopIteration is still raised on a peek() operation that 
tries to look past the end of the iterator.
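The core of the idea fits in a few lines (my own minimal single-item version, not the Cookbook code, but with the same choice of raising StopIteration on a peek past the end):

```python
class Peekable:
    """Iterator wrapper with single-item look-ahead."""
    _EMPTY = object()  # sentinel: nothing cached

    def __init__(self, iterable):
        self._it = iter(iterable)
        self._cached = self._EMPTY

    def __iter__(self):
        return self

    def __next__(self):
        if self._cached is not self._EMPTY:
            value, self._cached = self._cached, self._EMPTY
            return value
        return next(self._it)

    def peek(self):
        # Cache one item; raises StopIteration past the end.
        if self._cached is self._EMPTY:
            self._cached = next(self._it)
        return self._cached
```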

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parallel Python

2007-01-10 Thread sturlamolden

robert wrote:

> Thats true. IPC through sockets or (somewhat faster) shared memory -  cPickle 
> at least - is usually the maximum of such approaches.
> See 
> http://groups.google.de/group/comp.lang.python/browse_frm/thread/f822ec289f30b26a
>
> For tasks really requiring threading one can consider IronPython.
> Most advanced technique I've see for CPython ist posh : 
> http://poshmodule.sourceforge.net/


In SciPy there is an MPI-binding project, mpi4py.

MPI is becoming the de facto standard for high-performance parallel
computing, both on shared memory systems (SMPs) and clusters. Spawning
threads or processes is not the recommended way to do numerical parallel
computing. Threading makes programming certain tasks more convenient
(particularly GUI and I/O, for which the GIL does not matter anyway),
but is not a good paradigm for dividing CPU bound computations between
multiple processors. MPI is a high level API based on a concept of
"message passing", which allows the programmer to focus on solving the
problem, instead of on irrelevant distractions such as thread management
and synchronization.

Although MPI has standard APIs for C and Fortran, it may be used with
any programming language. For Python, an additional advantage of using
MPI is that the GIL has no practical consequence for performance. The
GIL can lock a process but not prevent MPI from using multiple
processors as MPI is always using multiple processes. For IPC, MPI will
e.g. use shared-memory segments on SMPs and tcp/ip on clusters, but all
these details are hidden.

It seems like 'ppsmp' of parallelpython.com is just a reinvention of a
small portion of MPI.


http://mpi4py.scipy.org/
http://en.wikipedia.org/wiki/Message_Passing_Interface

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parallel Python

2007-01-10 Thread sturlamolden

[EMAIL PROTECTED] wrote:

>That's right. ppsmp starts multiple interpreters in separate
> processes and organize communication between them through IPC.

Thus you are basically reinventing MPI.


http://mpi4py.scipy.org/
http://en.wikipedia.org/wiki/Message_Passing_Interface

-- 
http://mail.python.org/mailman/listinfo/python-list


Universal Feed Parser - How do I keep attributes?

2007-01-10 Thread [EMAIL PROTECTED]
I'm trying to use FeedParser to parse out Yahoo's Weather Data. I need
to capture some attribute values, but it looks like FeedParser strips
them out. Is there any way to keep them?

XML Snippet:
...

...

When I try to get the value, it's empty:

>>> d = feedparser.parse('http://weather.yahooapis.com/forecastrss?p=94089')
>>> d.feed.yweather_location
u''
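One workaround, if feedparser keeps dropping the attributes, is to parse the
raw feed with xml.etree.ElementTree, which preserves them. The sketch below
runs on a trimmed, hypothetical stand-in for the feed -- the exact
yweather:location element and its attributes are assumptions, not copied
from Yahoo's actual output:

```python
import xml.etree.ElementTree as ET

# A trimmed, hypothetical version of the RSS channel; the yweather:location
# element and its attribute names are assumptions about Yahoo's feed.
xml_doc = """<rss xmlns:yweather="http://xml.weather.yahoo.com/ns/rss/1.0">
  <channel>
    <yweather:location city="Sunnyvale" region="CA" country="US"/>
  </channel>
</rss>"""

root = ET.fromstring(xml_doc)
ns = {"yweather": "http://xml.weather.yahoo.com/ns/rss/1.0"}
loc = root.find("channel/yweather:location", ns)
print(loc.get("city"))   # Sunnyvale
```

For the live feed one would download the XML with urllib and hand the bytes
to ET.fromstring in the same way.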

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parallel Python

2007-01-10 Thread Nick Maclaren

In article <[EMAIL PROTECTED]>,
"sturlamolden" <[EMAIL PROTECTED]> writes:
|> 
|> MPI is becoming the de facto standard for high-performance parallel
|> computing, both on shared memory systems (SMPs) and clusters.

It has been for some time, and is still gaining ground.

|> Spawning
|> threads or processes is not recommended way to do numerical parallel
|> computing.

Er, MPI works by getting SOMETHING to spawn processes, which then
communicate with each other.

|> Threading makes programming certain tasks more convinient
|> (particularly GUI and I/O, for which the GIL does not matter anyway),
|> but is not a good paradigm for dividing CPU bound computations between
|> multiple processors. MPI is a high level API based on a concept of
|> "message passing", which allows the programmer to focus on solving the
|> problem, instead on irrelevant distractions such as thread managament
|> and synchronization.

Grrk.  That's not quite it.

The problem is that the current threading models (POSIX threads and
Microsoft's equivalent) were intended for running large numbers of
semi-independent, mostly idle, threads: Web servers and similar.
Everything about them, including their design (such as it is), their
interfaces and their implementations, are unsuitable for parallel HPC
applications.  One can argue whether that is insoluble, but let's not,
at least not here.

Now, Unix and Microsoft processes are little better but, because they
are more separate (and, especially, because they don't share memory)
are MUCH easier to run effectively on shared memory multi-CPU systems.
You still have to play administrator tricks, but they aren't as foul
as the ones that you have to play for threaded programs.  Yes, I know
that it is a bit Irish for the best way to use a shared memory system
to be to not share memory, but that's how it is.


Regards,
Nick Maclaren.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Question about compiling.

2007-01-10 Thread Bjoern Schliessmann
Steven W. Orr wrote:

> I *just* read the tutorial so please be gentle. I created a file
> called fib.py which works very nicely thank you. When I run it it
> does what it's supposed to do but I do not get a resulting .pyc
> file. 

.pyc files are created only if you import a .py file.
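If you want the .pyc without importing, you can also ask for it explicitly
via the py_compile module. A minimal sketch -- the module body is just a
throwaway example; note that on modern Python 3 the bytecode lands in a
__pycache__ directory and compile() returns its path, while on Python 2 it
was written next to the source file:

```python
import os
import py_compile
import tempfile

# Write a throwaway module to a temporary directory, then compile it.
src = os.path.join(tempfile.mkdtemp(), "fib.py")
with open(src, "w") as f:
    f.write(
        "def fib(n):\n"
        "    a, b = 0, 1\n"
        "    for _ in range(n):\n"
        "        a, b = b, a + b\n"
        "    return a\n"
    )

cached = py_compile.compile(src)   # returns the path of the .pyc it wrote
print(os.path.exists(cached))      # True
```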

Regards,


Björn

-- 
BOFH excuse #77:

Typo in the code

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: An iterator with look-ahead

2007-01-10 Thread Neil Cerutti
On 2007-01-10, Steven Bethard <[EMAIL PROTECTED]> wrote:
> Neil Cerutti wrote:
>> For use in a hand-coded parser I wrote the following simple
>> iterator with look-ahead.
>
> There's a recipe for this:
>
>http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/304373
>
> Note that the recipe efficiently supports an arbitrary
> look-ahead, not just a single item.
>
>> I haven't thought too deeply about what peek ought to return
>> when the iterator is exhausted. Suggestions are respectfully
>> requested.
>
> In the recipe, StopIteration is still raised on a peek()
> operation that tries to look past the end of the iterator.

That was all I could think of as an alternative, but that makes
it fairly inconvenient to use. I guess another idea might be to
allow the user to provide a "no peek" return value in the
constructor, if they so wish.
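That idea can be sketched without even touching the constructor: give peek()
itself an optional default, like dict.get, so exhaustion only raises when
the caller didn't provide a fallback. A minimal single-item version -- an
illustration, not the recipe's API:

```python
class Peekable:
    """Wrap an iterator with one item of look-ahead."""
    _missing = object()   # private sentinel: "nothing cached / no default"

    def __init__(self, iterable):
        self._it = iter(iterable)
        self._cached = self._missing

    def __iter__(self):
        return self

    def __next__(self):
        if self._cached is not self._missing:
            value, self._cached = self._cached, self._missing
            return value
        return next(self._it)

    next = __next__   # Python 2 spelling of the same method

    def peek(self, default=_missing):
        if self._cached is self._missing:
            try:
                self._cached = next(self._it)
            except StopIteration:
                if default is self._missing:
                    raise       # no fallback given: propagate, like the recipe
                return default
        return self._cached

p = Peekable("ab")
print(p.peek())         # a  -- look ahead without consuming
print(next(p))          # a
print(next(p))          # b
print(p.peek("<end>"))  # <end>  -- exhausted, so the default comes back
```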

-- 
Neil Cerutti
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parallel Python

2007-01-10 Thread sturlamolden

Nick Maclaren wrote:

> as the ones that you have to play for threaded programs.  Yes, I know
> that it is a bit Irish for the best way to use a shared memory system
> to be to not share memory, but that's how it is.

Thank you for clearing that up.

In any case, this means that Python can happily keep its GIL, as the
CPU-bound 'HPC' tasks for which the GIL does matter should be done
using multiple processes (not threads) anyway. That leaves threads as a
tool for programming certain I/O tasks and maintaining 'responsive'
user interfaces, for which the GIL incidentally does not matter.

I wonder if too much emphasis is put on thread programming these days.
Threads may be nice for programming web servers and the like, but not
for numerical computing. Reading books about thread programming, one
can easily get the impression that it is 'the' way to parallelize
numerical tasks on computers with multiple CPUs (or multiple CPU
cores). But if threads are inherently designed and implemented to stay
idle most of the time, that is obviously not the case.

I like MPI. Although it is a huge API with lots of esoteric functions,
I only need to know a handful to cover my needs. Not to mention the
fact that I can use MPI with Fortran, which is frowned upon by computer
scientists but loved by scientists and engineers specialized in any
other field.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Building Python 2.5.0 on AIX 5.3 - Undefined symbol: .__floor

2007-01-10 Thread Justin Johnson
It looks like I just need to upgrade my compiler version.  See
http://www-128.ibm.com/developerworks/forums/dw_thread.jsp?message=13876484&cat=72&thread=124105&treeDisplayType=threadmode1&forum=905#13876484
for more information.

Justin Johnson wrote:
> Hello,
>
> I'm trying to build Python 2.5.0 on AIX 5.3 using IBM's compiler
> (VisualAge C++ Professional / C for AIX Compiler, Version 6).  I run
> configure and make, but makes fails with undefined symbols.  See the
> output from configure and make below.
>
> svnadm /svn/build/python-2.5.0>env CC=cc CXX=xlC ./configure
> --prefix=$base_dir \
> > --disable-ipv6 \
> > --enable-shared=yes \
> > --enable-static=no
> checking MACHDEP... aix5
> checking EXTRAPLATDIR...
> checking for --without-gcc...
> checking for gcc... cc_r
> checking for C compiler default output file name... a.out
> checking whether the C compiler works... yes
> checking whether we are cross compiling... no
> checking for suffix of executables...
> checking for suffix of object files... o
> checking whether we are using the GNU C compiler... no
> checking whether cc_r accepts -g... yes
> checking for cc_r option to accept ANSI C... none needed
> checking for --with-cxx-main=... no
> checking how to run the C preprocessor... cc_r -E
> checking for egrep... grep -E
> checking for AIX... yes
> checking for --with-suffix...
> checking for case-insensitive build directory... no
> checking LIBRARY... libpython$(VERSION).a
> checking LINKCC... $(srcdir)/Modules/makexp_aix Modules/python.exp .
> $(LIBRARY); $(PURIFY) $(MAINCC)
> checking for --enable-shared... yes
> checking for --enable-profiling...
> checking LDLIBRARY... libpython$(VERSION).a
> checking for ranlib... ranlib
> checking for ar... ar
> checking for svnversion... found
> checking for a BSD-compatible install... ./install-sh -c
> checking for --with-pydebug... no
> checking whether cc_r accepts -OPT:Olimit=0... no
> checking whether cc_r accepts -Olimit 1500... no
> checking whether pthreads are available without options... yes
> checking whether xlC also accepts flags for thread support... no
> checking for ANSI C header files... yes
> checking for sys/types.h... yes
> checking for sys/stat.h... yes
> checking for stdlib.h... yes
> checking for string.h... yes
> checking for memory.h... yes
> checking for strings.h... yes
> checking for inttypes.h... yes
> checking for stdint.h... yes
> checking for unistd.h... yes
> checking asm/types.h usability... no
> checking asm/types.h presence... no
> checking for asm/types.h... no
> checking conio.h usability... no
> checking conio.h presence... no
> checking for conio.h... no
> checking curses.h usability... yes
> checking curses.h presence... yes
> checking for curses.h... yes
> checking direct.h usability... no
> checking direct.h presence... no
> checking for direct.h... no
> checking dlfcn.h usability... yes
> checking dlfcn.h presence... yes
> checking for dlfcn.h... yes
> checking errno.h usability... yes
> checking errno.h presence... yes
> checking for errno.h... yes
> checking fcntl.h usability... yes
> checking fcntl.h presence... yes
> checking for fcntl.h... yes
> checking grp.h usability... yes
> checking grp.h presence... yes
> checking for grp.h... yes
> checking shadow.h usability... no
> checking shadow.h presence... no
> checking for shadow.h... no
> checking io.h usability... no
> checking io.h presence... no
> checking for io.h... no
> checking langinfo.h usability... yes
> checking langinfo.h presence... yes
> checking for langinfo.h... yes
> checking libintl.h usability... no
> checking libintl.h presence... no
> checking for libintl.h... no
> checking ncurses.h usability... no
> checking ncurses.h presence... no
> checking for ncurses.h... no
> checking poll.h usability... yes
> checking poll.h presence... yes
> checking for poll.h... yes
> checking process.h usability... no
> checking process.h presence... no
> checking for process.h... no
> checking pthread.h usability... yes
> checking pthread.h presence... yes
> checking for pthread.h... yes
> checking signal.h usability... yes
> checking signal.h presence... yes
> checking for signal.h... yes
> checking stropts.h usability... yes
> checking stropts.h presence... yes
> checking for stropts.h... yes
> checking termios.h usability... yes
> checking termios.h presence... yes
> checking for termios.h... yes
> checking thread.h usability... yes
> checking thread.h presence... yes
> checking for thread.h... yes
> checking for unistd.h... (cached) yes
> checking utime.h usability... yes
> checking utime.h presence... yes
> checking for utime.h... yes
> checking sys/audioio.h usability... no
> checking sys/audioio.h presence... no
> checking for sys/audioio.h... no
> checking sys/bsdtty.h usability... no
> checking sys/bsdtty.h presence... no
> checking for sys/bsdtty.h... no
> checking sys/file.h usability... yes
> checking sys/file.h presence... yes
> checking for sys/file.h... yes
> checking sys/loadavg.h

Re: Announcement -- ZestyParser

2007-01-10 Thread Terry Reedy

"Adam Atlas" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
| This has been on Cheese Shop for a few weeks now, being updated now and
| then, but I never really announced it. I just now put up a real web
| page for it, so I thought I'd take the opportunity to mention it here.
|
| ZestyParser is my attempt at a flexible toolkit for creating concise,
| precise, and Pythonic parsers. None of the existing packages really met
| my needs; Pyparsing came the closest, but I find it still has some
| shortcomings. ZestyParser is simply an abstract implementation of how I
| think about parsing; I don't expect it'll meet everyone's parsing
| needs, but I hope it'll make typical parsing tasks more joyful.
|
| Here's the web page:
| http://adamatlas.org/2006/12/ZestyParser/
| Here's the Cheese Shop page:
| http://cheeseshop.python.org/pypi/ZestyParser
| (I recommend you get the source package, as it includes the examples.)

I would like to look at the examples online (along with the doc file)
before downloading this and setuptools and installing, so I can see what
it actually looks like in use.

tjr



-- 
http://mail.python.org/mailman/listinfo/python-list


Seattle Python Interest Group Thursday at 7:00 PM

2007-01-10 Thread James Thiele
7pm at the bar below Third Place books in Ravenna:
http://www.seapig.org/ThirdPlaceMeetingLocation

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parallel Python

2007-01-10 Thread Paul Rubin
[EMAIL PROTECTED] (Nick Maclaren) writes:
> Yes, I know that it is a bit Irish for the best way to use a shared
> memory system to be to not share memory, but that's how it is.

But I thought serious MPI implementations use shared memory if they
can.  That's the beauty of it, you can run your application on SMP
processors getting the benefit of shared memory, or split it across
multiple machines using ethernet or infiniband or whatever, without
having to change the app code.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to protect a piece of critical code?

2007-01-10 Thread Steve Holden
Hendrik van Rooyen wrote:
> Hi,
> 
> I would like to do the following as one atomic operation:
> 
> 1) Append an item to a list
> 2) Set a Boolean indicator
> 
> It would be almost like getting and holding the GIL,
> to prevent a thread swap out between the two operations.
> - sort of the inverted function than for which the GIL
> seems to be used, which looks like "let go", get control
> back via return from blocking I/O, and then "re - acquire"
> 
> Is this "reversed" usage possible?
> Is there some way to prevent thread swapping?
> 
This seems to me to be a typical example of putting the cart before the 
horse. Therefore, please don't think that what follows is directed 
specifically at you: it's directed at everybody who thinks that their 
problem is something other than it really is (of course, my extensive 
experience on c.l.py plus my well-known psychic powers uniquely qualify 
me to explain to you that you don't understand your own problem).

> The question arises in the context of a multi threaded
> environment where the list is used as a single producer,
> single consumer queue - I can solve my problem in various
> ways, of which this is one, and I am curious as to if it is 
> possible to prevent a thread swap from inside the thread.
> 
Of course you will know what they say about curiosity [1].

You don't say what the Boolean indicator is for. My natural inclination 
is to assume it's to say whether there's anything in the list. The 
Twisted crew can tell you this is a terrible mistake. What you should 
really do is define a function that waits until there is something to 
put on the list and then returns a deferred that will eventually 
indicate whether the insertion was successful [2].

But your *actual* problem appears to be the introduction of critical 
sections into your program, a question about which computer scientists 
have written for over forty years now, albeit in the guise of 
discussions about how to get a good meal [3].

I could go on, but I am realising as I write that less and less of this 
is really relevant to you. In short, please don't try to reinvent the 
wheel when there are wheelwrights all around and a shop selling spare 
wheels just around the corner. Python is already replete with ways to 
implement critical sections and thread-safe queuing mechanisms [5].

I could, of course, say

http://www.justfuckinggoogleit.com/search.pl?query=python+atomic+operation

but that would seem rude, which is against the tradition of c.l.py. 
Besides which the answers aren't necessarily as helpful as what's been 
posted on this thread, so I'll content myself with saying that one's time 
can often be better spent R'ing TFM than posting on this newsgroup, but that 
while Google /may/ be your friend it's not as good a friend as this 
newsgroup.

If it isn't obvious that this post was meant more to amuse regular 
readers than inform and/or chastise someone who isn't (yet) one then 
please accept my apologies. Fortunately I don't normally go on like this 
more than once a year, so now it's hey ho for 2008 [6].

If you have been, thank you for reading. If you haven't then I guess you 
won't be reading this either. Have a nice day. And a Happy New Year to 
all my readers.

regards
  Steve

[1]: It killed the cat, thereby letting MSDOS's "type" command get a 
toehold and leaving people to fight about whether "less" really was 
"more" or not.

[2]: Of course I'm joking. But it seems that Twisted's detractors don't 
have much of a sense of humour, so it's much more interesting to poke 
fun at Twisted, which at least has the merit of having been designed by 
real human beings with brains and everything. Though I have never seen 
any of them take a beer. [4].

[3] The UNIX fork() system call is actually named in honour of Edsger 
Dijkstra's discussions of the philosophers' problem, a classic text in 
the development of critical sections. Not a lot of people know that. 
Mostly because I just made it up.

[4] That's this week's dig at the Twisted community. I cheerfully 
admit that although this assertion is true it's a simple attempt to give 
currency to a scurrilous fabrication. I'm hoping I can persuade the 
Twisted crew to descend on PyCon /en masse/, thereby multiplying the fun 
quotient by 1.43.

[5] Including thread.Lock and Queue.Queue - I bet you're getting sorry 
you asked now, aren't you?

[6] Which will be the year after 2007, which will be the year of Python 
3000. Or not, depending on whether Py3k is a dead parrot or not. I'm 
personally betting on "not". But not necessarily on 2007.
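For anyone who skipped straight to footnote [5]: the queue really is the
boring, correct answer to the original question. A minimal sketch of the
single-producer, single-consumer pattern with the standard library (the
module is named Queue in Python 2, queue in Python 3) -- put() is atomic,
so no separate Boolean indicator is needed:

```python
import threading
import queue  # named Queue in Python 2

q = queue.Queue()

def producer():
    for i in range(5):
        q.put(i)          # atomic: no separate "data ready" flag needed
    q.put(None)           # sentinel: tell the consumer we are done

received = []

def consumer():
    while True:
        item = q.get()    # blocks until something is available
        if item is None:
            break
        received.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)           # [0, 1, 2, 3, 4]
```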
-- 
Steve Holden   +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd  http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
Blog of Note:  http://holdenweb.blogspot.com

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parallel Python

2007-01-10 Thread Nick Maclaren

In article <[EMAIL PROTECTED]>,
"sturlamolden" <[EMAIL PROTECTED]> writes:
|> 
|> In any case, this means that Python can happily keep its GIL, as the
|> CPU bound 'HPC' tasks for which the GIL does matter should be done
|> using multiple processes (not threads) anyway. That leaves threads as a
|> tool for programming certain i/o tasks and maintaining 'responsive'
|> user interfaces, for which the GIL incidentally does not matter.

Yes.  That is the approach being taken at present by almost everyone.

|> I wonder if too much emphasis is put on thread programming these days.
|> Threads may be nice for programming web servers and the like, but not
|> for numerical computing. Reading books about thread programming, one
|> can easily get the impression that it is 'the' way to parallelize
|> numerical tasks on computers with multiple CPUs (or multiple CPU
|> cores). But if threads are inherently designed and implemented to stay
|> idle most of the time, that is obviously not the case.

You have to distinguish "lightweight processes" from "POSIX threads"
from the generic concept.  It is POSIX and Microsoft threads that are
inherently like that, and another kind of thread model might be very
different.  Don't expect to see one provided any time soon, even by
Linux.

OpenMP is the current leader for SMP parallelism, and it would be
murder to produce a Python binding that had any hope of delivering
useful performance.  I think that it could be done, but implementing
the result would be a massive task.  The Spruce Goose and Project
Habbakuk (sic) spring to my mind, by comparison[*] :-)

|> I like MPI. Although it is a huge API with lots of esoteric functions,
|> I only need to know a handfull to cover my needs. Not to mention the
|> fact that I can use MPI with Fortran, which is frowned upon by computer
|> scientists but loved by scientists and engineers specialized in any
|> other field.

Yup.  MPI is also debuggable and tunable (with difficulty).  Debugging
and tuning OpenMP and POSIX threads are beyond anyone except the most
extreme experts; I am only on the borderline of being able to.

The ASCI bunch favour Co-array Fortran, and its model matches Python
like a steam turbine is a match for a heart transplant.


[*] They are worth looking up, if you don't know about them.


Regards,
Nick Maclaren.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parallel Python

2007-01-10 Thread Nick Maclaren

In article <[EMAIL PROTECTED]>,
Paul Rubin  writes:
|>
|> > Yes, I know that it is a bit Irish for the best way to use a shared
|> > memory system to be to not share memory, but that's how it is.
|> 
|> But I thought serious MPI implementations use shared memory if they
|> can.  That's the beauty of it, you can run your application on SMP
|> processors getting the benefit of shared memory, or split it across
|> multiple machines using ethernet or infiniband or whatever, without
|> having to change the app code.

They use it for the communication, but don't expose it to the
programmer.  It is therefore easy to put the processes on different
CPUs, and get the memory consistency right.


Regards,
Nick Maclaren.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parallel Python

2007-01-10 Thread Sergei Organov
[EMAIL PROTECTED] (Nick Maclaren) writes:
> In article <[EMAIL PROTECTED]>,
> "sturlamolden" <[EMAIL PROTECTED]> writes:
[...]
> |> I wonder if too much emphasis is put on thread programming these days.
> |> Threads may be nice for programming web servers and the like, but not
> |> for numerical computing. Reading books about thread programming, one
> |> can easily get the impression that it is 'the' way to parallelize
> |> numerical tasks on computers with multiple CPUs (or multiple CPU
> |> cores). But if threads are inherently designed and implemented to stay
> |> idle most of the time, that is obviously not the case.
>
> You have to distinguish "lightweight processes" from "POSIX threads"
> from the generic concept.  It is POSIX and Microsoft threads that are
> inherently like that,

Do you mean that POSIX threads are inherently designed and implemented
to stay idle most of the time?! If so, I'm afraid those guys that
designed POSIX threads won't agree with you. In particular, as far as I
remember, David R. Butenhof said a few times in comp.programming.threads
that POSIX threads were primarily designed to meet parallel programming
needs on SMP, or at least that was how I understood him.

-- Sergei.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: An iterator with look-ahead

2007-01-10 Thread Paddy

Neil Cerutti wrote:

> On 2007-01-10, Steven Bethard <[EMAIL PROTECTED]> wrote:
> > Neil Cerutti wrote:
> >> For use in a hand-coded parser I wrote the following simple
> >> iterator with look-ahead.
> >
> > There's a recipe for this:
> >
> >http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/304373
> >
> > Note that the recipe efficiently supports an arbitrary
> > look-ahead, not just a single item.
> >
> >> I haven't thought too deeply about what peek ought to return
> >> when the iterator is exhausted. Suggestions are respectfully
> >> requested.
> >
> > In the recipe, StopIteration is still raised on a peek()
> > operation that tries to look past the end of the iterator.
>
> That was all I could think of as an alternative, but that makes
> it fairly inconvenient to use. I guess another idea might be to
> allow user to provide a "no peek" return value in the
> constructor, if they so wish.
>
> --
> Neil Cerutti
You could raise a different exception, say PeekPastEndException?
- Paddy.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parallel Python

2007-01-10 Thread Nick Maclaren

In article <[EMAIL PROTECTED]>,
Sergei Organov <[EMAIL PROTECTED]> writes:
|> 
|> Do you mean that POSIX threads are inherently designed and implemented
|> to stay idle most of the time?! If so, I'm afraid those guys that
|> designed POSIX threads won't agree with you. In particular, as far as I
|> remember, David R. Butenhof said a few times in comp.programming.threads
|> that POSIX threads were primarily designed to meet parallel programming
|> needs on SMP, or at least that was how I understood him.

I do mean that, and I know that they don't agree.  However, the word
"designed" doesn't really make a lot of sense for POSIX threads - the
one I tend to use is "perpetrated".

The people who put the specification together were either unaware of
most of the experience of the previous 30 years, or chose to ignore it.
In particular, in this context, the importance of being able to control
the scheduling was well-known, as was the fact that it is NOT possible
to mix processes with different scheduling models on the same set of
CPUs.  POSIX's facilities are completely hopeless for that purpose, and
most of the systems I have used effectively ignore them.

I could go on at great length, and the performance aspects are not even
the worst aspect of POSIX threads.  The fact that there is no usable
memory model, and the synchronisation depends on C to handle the
low-level consistency, but there are no CONCEPTS in common between
POSIX and C's memory consistency 'specifications' is perhaps the worst.
That is why many POSIX threads programs work until the genuinely
shared memory accesses become frequent enough that you get some to the
same location in a single machine cycle.


Regards,
Nick Maclaren.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie - converting csv files to arrays in NumPy - Matlab vs. Numpy comparison

2007-01-10 Thread oyekomova
Thanks for your help. I compared the following code in NumPy with the
csvread in Matlab for a very large csv file. Matlab read the file in
577 seconds. On the other hand, this code below kept running for over 2
hours. Can this program be made more efficient? FYI - The csv file was
a simple 6 column file with a header row and more than a million
records.


import csv
from numpy import array
import time
t1=time.clock()
file_to_read = file('somename.csv','r')
read_from = csv.reader(file_to_read)
read_from.next()

datalist = [ map(float, row[:]) for row in read_from ]

# now the real data
data = array(datalist, dtype = float)

elapsed=time.clock()-t1
print elapsed

Robert Kern wrote:
> oyekomova wrote:
> > I would like to know how to convert a csv file with a header row into a
> > floating point array without the header row.
>
> Use the standard library module csv. Something like the following is a cheap 
> and
> cheerful solution:
>
>
> import csv
> import numpy
>
> def float_array_from_csv(filename, skip_header=True):
> f = open(filename)
> try:
> reader = csv.reader(f)
> floats = []
> if skip_header:
> reader.next()
> for row in reader:
> floats.append(map(float, row))
> finally:
> f.close()
>
> return numpy.array(floats)
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma
>  that is made terrible by our own mad attempt to interpret it as though it had
>  an underlying truth."
>   -- Umberto Eco

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic library loading, missing symbols

2007-01-10 Thread [EMAIL PROTECTED]
So, I did some searching and using python -v to run my code, I was able
to see that my module was loaded by the python interpreter using
dlopen("lib", 2).

2 is the RTLD_NOW flag, meaning that RTLD_LOCAL is assumed by the
system.

I suppose this means that any subsequent libraries dlopened will not
see any of the symbols in my module?

I guess I'll have to look through the Python documentation to see if
they offer any workarounds to this problem.
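For what it's worth, CPython does expose a knob for exactly this:
sys.setdlopenflags changes the flags the interpreter uses when it dlopens
extension modules, so a module imported afterwards can have its symbols
promoted to RTLD_GLOBAL. A sketch (Unix-only; the os.RTLD_* constants are
the Python 3 spelling, older code used the dl module's constants, and
"mymodule" is a hypothetical extension name):

```python
import os
import sys

# Make subsequent extension-module imports export their symbols globally,
# so later dlopen() calls in third-party code can resolve against them.
old = sys.getdlopenflags()
sys.setdlopenflags(os.RTLD_NOW | os.RTLD_GLOBAL)
# import mymodule   # hypothetical Boost.Python extension goes here
sys.setdlopenflags(old)   # restore the interpreter's default afterwards
print(sys.getdlopenflags() == old)  # True
```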

~Doug

On Jan 10, 9:38 am, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> > Did you verify, using nm -D, that the symbol is indeed present in
> > the shared object, not just in the source code?
>
> Yes, the symbol is found in the shared object when using nm.
>
> > What flags are given to that dlopen call?
>
> dlopen(lib, RTLD_NOW | RTLD_GLOBAL);
>
> > No. The dynamic linker doesn't search files to resolve symbols; instead,
> > it searches the process' memory. It first loads the referenced shared
> > libraries (processing the DT_NEEDED records in each one); that uses
> > the LD_LIBRARY_PATH. Then, symbol resolution needs to find everything
> > in the libraries that have already been loaded.
>
> Ok, so if the linker is searching the process address space, then I
> suppose what really comes into play here is how the Python interpreter
> dynamically loaded my module, a module which is in turn calling code
> that does the dlopen above. If the Python interpreter is not loading
> my library as global, does that mean the linker will not find them
> when subsequent libraries are loaded?
>
> > There are various ways to control which subset of in-memory symbols
> > the dynamic linker considers: the RTLD_GLOBAL/RTLD_LOCAL flags play
> > a role, the -Bsymbolic flag given to the static linker has an impact,
> > and so does the symbol visibility (hidden/internal/protected).
>
> Ok, any other suggestions for things to try? Since all of the dlopen
> calls happen in 3rd party code, I won't really be able to modify them.
> I will try looking into more compile time flags for the linker and see
> what I can come up with. Keep in mind that this code works when it is
> all C++. Only when I add the Boost.Python wrapper to my code and then
> import in Python do I get this symbol error.
> 
> Thank you for your help,
> ~Doug

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Announcement -- ZestyParser

2007-01-10 Thread Istvan Albert

Terry Reedy wrote:

> I would like to look at the examples online (along with the doc file)
> before downloading this and setuptools and installing, so I can see what
> it actually looks like in use.

Yes, I second that.

I poked around for a little while but could not find a single example.
Then I gave up as I'm not going to install something before knowing
what it is.

i.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Read CSV file into an array

2007-01-10 Thread Tobiah
oyekomova wrote:
> I would like to know how to read a CSV file with a header ( n columns
> of float data) into an array without the header row.
> 
import csv

l = []
for line in csv.reader(open("my.csv").readlines()[1:]):
    l.append(line)

Which really gets you a list of lists.
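A slightly more idiomatic variant streams the file instead of slurping it
with readlines() first, and converts to floats while reading. Written for
modern Python 3; the file name and contents below are made up for the
demonstration:

```python
import csv
import os
import tempfile

# Create a small CSV file to demonstrate with (hypothetical data).
path = os.path.join(tempfile.mkdtemp(), "my.csv")
with open(path, "w") as f:
    f.write("a,b\n1.0,2.0\n3.0,4.0\n")

with open(path, newline="") as f:
    reader = csv.reader(f)
    next(reader)                                  # skip the header row
    rows = [list(map(float, row)) for row in reader]

print(rows)   # [[1.0, 2.0], [3.0, 4.0]]
```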

-- 
Posted via a free Usenet account from http://www.teranews.com

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie - converting csv files to arrays in NumPy - Matlab vs. Numpy comparison

2007-01-10 Thread sturlamolden

oyekomova wrote:
> Thanks for your help. I compared the following code in NumPy with the
> csvread in Matlab for a very large csv file. Matlab read the file in
> 577 seconds. On the other hand, this code below kept running for over 2
> hours. Can this program be made more efficient? FYI - The csv file was
> a simple 6 column file with a header row and more than a million
> records.
>
>
> import csv
> from numpy import array
> import time
> t1=time.clock()
> file_to_read = file('somename.csv','r')
> read_from = csv.reader(file_to_read)
> read_from.next()

> datalist = [ map(float, row[:]) for row in read_from ]

I'm willing to bet that this is your problem. Python lists are arrays
under the hood!

Try something like this instead:


# read the whole file in one chunk
lines = file_to_read.readlines()
# count the number of columns (commas in the first data row, plus one)
n = 1
for c in lines[1]:
    if c == ',': n += 1
# count the number of rows (everything after the header)
m = len(lines) - 1
# preallocate (empty and arange must be imported from numpy)
data = empty((m,n), dtype=float)
# create a csv reader over the data rows, skipping the header
reader = csv.reader(lines[1:])
# read row by row into the preallocated array
for i in arange(0,m):
    data[i,:] = map(float, reader.next())

And if this is too slow, you may consider vectorizing the last loop:

data = empty((m,n), dtype=float)
# strip the newlines so the joined string parses as one long csv record
newstr = ",".join(line.strip() for line in lines[1:])
flatdata = data.reshape((n*m)) # flatdata is a view of data, not a copy
reader = csv.reader([newstr])
flatdata[:] = map(float, reader.next())

I hope this helps!

> Robert Kern wrote:
> > oyekomova wrote:
> > > I would like to know how to convert a csv file with a header row into a
> > > floating point array without the header row.
> >
> > Use the standard library module csv. Something like the following is a 
> > cheap and
> > cheerful solution:
> >
> >
> > import csv
> > import numpy
> >
> > def float_array_from_csv(filename, skip_header=True):
> > f = open(filename)
> > try:
> > reader = csv.reader(f)
> > floats = []
> > if skip_header:
> > reader.next()
> > for row in reader:
> > floats.append(map(float, row))
> > finally:
> > f.close()
> >
> > return numpy.array(floats)
> >
> > --
> > Robert Kern
> >
> > "I have come to believe that the whole world is an enigma, a harmless enigma
> >  that is made terrible by our own mad attempt to interpret it as though it 
> > had
> >  an underlying truth."
> >   -- Umberto Eco

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Question about compiling.

2007-01-10 Thread Rob Wolfe
Gabriel Genellina <[EMAIL PROTECTED]> writes:

> At Tuesday 9/1/2007 14:56, Steven W. Orr wrote:
>
>>I *just* read the tutorial so please be gentle. I created a file called
>>fib.py which works very nicely thank you. When I run it it does what it's
>>supposed to do but I do not get a resulting .pyc file. The tutorial says I
>>shouldn't do anything special to create it. I have machines that have both
>>2.4.1 and 2.3.5. Does anyone have an idea what to do?
>
> Welcome to Python!
> When you run a script directly, no .pyc file is generated. Only when a
> module is imported (See section 6.1.2 on the tutorial). And don't
> worry about it...

That's not the whole truth. :)
If you want to compile your script, you can do that, of course:

$ ls fib*
fib.py
$ python2.4 -mpy_compile fib.py
$ ls fib*
fib.py  fib.pyc
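The same can be done from inside Python with the py_compile module. A
minimal self-contained sketch (the temporary-file dance only exists to
stand in for fib.py; on modern Python, compile() returns the path of the
generated bytecode file):

```python
import os
import py_compile
import tempfile

# Create a throwaway module to compile (stands in for fib.py).
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("def fib(n):\n"
            "    a, b = 0, 1\n"
            "    while n:\n"
            "        a, b = b, a + b\n"
            "        n -= 1\n"
            "    return a\n")
    path = f.name

# py_compile.compile returns the path of the generated .pyc file.
pyc = py_compile.compile(path)
print(os.path.exists(pyc))   # True
os.remove(path)
```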

-- 
HTH,
Rob
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parallel Python

2007-01-10 Thread Carl J. Van Arsdall
Just as something to note, but many HPC applications will use a 
combination of both MPI and threading (OpenMP usually, as for the 
underlying thread implementation I don't have much to say).  It's 
interesting to see on this message board this huge "anti-threading" 
mindset, but the HPC community seems to be happy using a little of both 
depending on their application and the topology of their parallel 
machine.  Although if I was doing HPC applications, I probably would not 
choose to use Python but I would write things in C or FORTRAN. 

What I liked about python threads was that they were easy whereas using 
processes and IPC is a real pain in the butt sometimes.  I don't 
necessarily think this module is the end-all solution to all of our 
problems but I do think that its a good thing and I will toy with it 
some in my spare time.  I think that any effort to making python 
threading better is a good thing and I'm happy to see the community 
attempt to make improvements.  It would also be cool if this would be 
open sourced, and I'm not quite sure why it's not.

-carl
 

-- 

Carl J. Van Arsdall
[EMAIL PROTECTED]
Build and Release
MontaVista Software

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parallel Python

2007-01-10 Thread Nick Maclaren

In article <[EMAIL PROTECTED]>,
"Carl J. Van Arsdall" <[EMAIL PROTECTED]> writes:
|> 
|> Just as something to note, but many HPC applications will use a 
|> combination of both MPI and threading (OpenMP usually, as for the 
|> underlying thread implementation i don't have much to say).  Its 
|> interesting to see on this message board this huge "anti-threading" 
|> mindset, but the HPC community seems to be happy using a little of both 
|> depending on their application and the topology of their parallel 
|> machine.  Although if I was doing HPC applications, I probably would not 
|> choose to use Python but I would write things in C or FORTRAN. 

That is a commonly quoted myth.

Some of the ASCI community did that, but even they have backed off
to a great extent.  Such code is damn near impossible to debug, let
alone tune.  To the best of my knowledge, no non-ASCI application
has ever done that, except for virtuosity.  I have several times
asked claimants to name some examples of code that does that and is
used in the general research community, and have so far never had a
response.

I managed the second-largest HPC system in UK academia for a decade,
ending less than a year ago, incidentally, and was and am fairly well
in touch with what is going on in HPC world-wide.


Regards,
Nick Maclaren.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Question about compiling.

2007-01-10 Thread tac-tics
> That's not the whole truth. :)

The whole truth is that from a developer's POV, .pyc files are
unimportant.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: (newbie) Is there a way to prevent "name redundancy" in OOP ?

2007-01-10 Thread Carl Banks
Martin Miller wrote:
> Carl Banks wrote:

> > Because the usage deceptively suggests that it defines a name in the
> > local namespace.  Failing may be too strong a word, but I've come to
> > expect a consistent behavior w.r.t. namespaces, which this violates, so
> > I think it qualifies as a failure.
>
> I don't see how the usage deceptively suggests this at all. In this
> case -- your sample code for fun() and fun2() -- all were simply
> Pin('aap'). Since no additional namespace argument was supplied,

Exactly.  In normal Python, to create a global variable in a local
context, you must specify the namespace.  Normally, variables are
always created in the most local context in Python, only.  So now you
come in with this this spiffy "look, it creates the variable for you!"
class, but it completely goes against normal Python behavior by
creating the variable in the global namespace even when in a local
context.  I think that is deceptive and confusing behavior for
something that claims to create variables for you.


> > I think programmatically creating variables is fine; I just recommend
> > you not use sys._getframe, nor the automagical namespace self-insertion
> > class, to do it.
>
> You've explained some of your worries about sys._getframe. It would
> be interesting to hear specifically what it is you don't like about
> the idea of namespace self-insertion -- mainly because of the local
> namespace limitation?

The local namespace thing has nothing to do with it; I would still be
very much against this even if it did work for locals.  My main problem
with this is it saddles the class with behavior that interferes with
using it as a normal class.  For instance, if you wanted to do
something like this:

def fun(pin_name):
    p = Pin(pin_name)
    do_something_with(p)

At least it looks like it works!  But it works only with the
undesirable side-effect of creating a global variable of some unknown
name.  It could be no big deal, or it could be a gaping security hole.
(Yes, I know you can just pass in an empty namespace.  Thanks for that.
 Nice to know it's at least possible to jump though hoops just to
regain normal usage of the class.)

The problem here, see, is the hubris of the author in thinking that he
can anticipate all possible uses of a class (or at least that he can
presume that no user will ever want to use the class in a normal way).
Of course, no author can ever anticipate all possible uses of their
code, and it's inevitable that users will want to use code in a way the
author didn't intend.  Most often it's not any hubris on the author's
part, but mere ignorance, and is forgivable.

But when the author deliberately does extra work to cut the user off
from normal usage, then ignorance is no longer a defense: the normal
way was considered, and the author sanctimoniously decided on the
user's behalf that the user would never want or need normal usage of
the class.  That is a lot less forgivable.

If you want to respect the user, your class is going to have to change.

1. Reprogram the class so that, BY DEFAULT, it works normally, and only
inserts itself into a namespace when specifically requested.  Then, at
least, users can ignore the autoinsertion silliness if they don't want
to use it.  Plus, it affords better documentation to readers who
aren't in on the secret of this Pin class (seeing an
"autoinsert=True" passed to the constructor is a clue something's going
on).

2. (Better, IMO) Write a well-named function to do it.  E.g.:

def create_global_Pin_variable(name, namespace=None):
    if namespace is None:
        namespace = sys._getframe(1).f_globals
    namespace[name] = Pin(name)

Leave it out of the class.  Classes are better off when they only worry
about what's going on in their own namespaces.  Leave dealing with
external namespaces to an external function.  The function's name
documents the fact that: you're creating a variable, it's global, it's
a Pin.  You've gotten surprise down to as small as you're going to get
it.
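Recommendation 1 above can be sketched as follows. This is hypothetical:
the real Pin class from the thread is not shown here, so only the
namespace-handling behavior is illustrated -- insertion happens only when
the caller explicitly passes a namespace:

```python
# Hypothetical sketch of recommendation 1: the Pin class only touches a
# namespace when explicitly asked to, via a keyword argument.
class Pin:
    def __init__(self, name, namespace=None):
        self.name = name
        if namespace is not None:      # opt-in, never automatic
            namespace[name] = self

# Normal usage: no side effects on any namespace.
p = Pin("aap")
print(p.name)            # aap

# Opt-in usage: the caller chooses the namespace explicitly.
ns = {}
Pin("noot", namespace=ns)
print("noot" in ns)      # True
```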


Carl Banks

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Internet Survey

2007-01-10 Thread krw
In article <[EMAIL PROTECTED]>, 
[EMAIL PROTECTED] says...
> In article <[EMAIL PROTECTED]>,
>Lefty Bigfoot <[EMAIL PROTECTED]> wrote:
> >On Wed, 10 Jan 2007 08:28:33 -0600, [EMAIL PROTECTED] wrote
> >(in article <[EMAIL PROTECTED]>):
> >
> >> In article <[EMAIL PROTECTED]>,
> >>"Elan Magavi" <[EMAIL PROTECTED]> wrote:
> >>> Is that like.. OctaPussy?
> >> 
> >> I didn't read their stuff.  Are they really trying to put a 
> >> round peg in a square hole?
> >
> >Sounds more like an octagonal pole in a round hole.
> 
> Nah, I figured they picked the word octal because it's never
> been used before.

...and "HexaPussy" just wouldn't be right.

-- 
  Keith
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python - C# interoperability

2007-01-10 Thread mc
Thanks to all who responded.  It appears that there may be a solution
as follows:

use jythonc to turn Python program into Java bytecode

use Microsoft's jbimp to turn Java bytecode into .NET DLL

It sounds roundabout, but I am investigating.

-- 
http://mail.python.org/mailman/listinfo/python-list

