Re: Is Python a commercial proposition ?

2012-08-01 Thread Mark Lawrence

On 01/08/2012 00:31, David wrote:

On 01/08/2012, lipska the kat  wrote:

On 31/07/12 14:52, David wrote:


[1] as in beer
[2] for research purposes


There's one (as in 1 above) in the pump for you.


Great, more beer => better research => \o/\o/\o/
But, "pump" sounds a bit extreme .. I usually sip contentedly from a glass :p



You complete ignoramus, if it gets poured in advance that's no good to 
anybody as it'll go flat.  Has to stay in the pump until you're ready to 
drink it from the glass.  Don't you know anything about the importance 
of process and timing? :)


--
Cheers.

Mark Lawrence.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread Laszlo Nagy


As I wrote "I found many nice things (Pipe, Manager and so on), but
actually even
this seems to work:" yes I did read the documentation.

Sorry, I did not want to be offensive.


I was just surprised that it worked better than I expected even
without Pipes and Queues, but now I understand why..

Anyway now I would like to be able to detach subprocesses to avoid the
nasty code reloading that I was talking about in another thread, but
things get more tricky, because I can't use queues and pipes to
communicate with a running process that is not my child, correct?

Yes, I think that is correct. Instead of detaching a child process, you 
can create independent processes and use other frameworks for IPC. For 
example, Pyro.  It is not as effective as multiprocessing.Queue, but in 
return, you will have the option to run your service across multiple 
servers.


The most effective IPC is usually through shared memory. But there is no 
OS independent standard Python module that can communicate over shared 
memory. Except multiprocessing of course, but AFAIK it can only be used 
to communicate between fork()-ed processes.
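For readers who want to see the fork()-ed case in action, here is a minimal sketch (not from the original post) of memory shared between a parent and the children it forks, using multiprocessing.Value:

```python
# Sketch: a multiprocessing.Value is backed by an anonymous shared
# memory map, visible to the parent and the children it forks.
from multiprocessing import Process, Value

def worker(counter):
    # Each child increments the shared counter under its lock.
    with counter.get_lock():
        counter.value += 1

if __name__ == '__main__':
    counter = Value('i', 0)   # 'i' = C int in shared memory
    procs = [Process(target=worker, args=(counter,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)      # 4
```

As noted above, this only works between related processes; unrelated processes would need a framework such as Pyro or an OS-specific shared memory mechanism.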

--
http://mail.python.org/mailman/listinfo/python-list


Re: Is Python a commercial proposition ?

2012-08-01 Thread lipska the kat

On 01/08/12 09:06, Mark Lawrence wrote:

On 01/08/2012 00:31, David wrote:

On 01/08/2012, lipska the kat  wrote:

On 31/07/12 14:52, David wrote:


[1] as in beer
[2] for research purposes


There's one (as in 1 above) in the pump for you.


Great, more beer => better research => \o/\o/\o/
But, "pump" sounds a bit extreme .. I usually sip contentedly from a
glass :p



You complete ignoramus, if it gets poured in advance that's no good to
anybody as it'll go flat. Has to stay in the pump until you're ready to
drink it from the glass. Don't you know anything about the importance of
process and timing? :)



Heh heh, obviously never got drunk ... er I mean served behind the bar 
at uni/college/pub %-}


lipska

--
Lipska the Kat: Troll hunter, sandbox destroyer
and farscape dreamer of Aeryn Sun
--
http://mail.python.org/mailman/listinfo/python-list


Re: my email

2012-08-01 Thread BJ Swope
I would recommend changing your birthday as well ;)


--
"The end of democracy and the defeat of the American Revolution will occur
when government falls into the hands of lending institutions and moneyed
incorporations."
-- Thomas Jefferson

The whole world is a comedy to those that think, a tragedy to those that
feel.  ---Horace Walpole





On Sat, Jul 21, 2012 at 6:36 AM, Maria Hanna Carmela Dionisio <
[email protected]> wrote:

> lolz sorry i already change it..just a newbhie, that's why :Dv
>   --
> *From:* MRAB 
> *To:* [email protected]
> *Sent:* Wednesday, July 18, 2012 10:08 AM
> *Subject:* Re: my email
>
> On 18/07/2012 02:44, Maria Hanna Carmela Dionisio wrote:
> > [email protected]
> >
> > Just a newbhie here :>
> >
> ...who has just revealed her password!
>
> [remainder snipped]
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread andrea crotti
2012/8/1 Laszlo Nagy :
>> I was just surprised that it worked better than I expected even
>> without Pipes and Queues, but now I understand why..
>>
>> Anyway now I would like to be able to detach subprocesses to avoid the
>> nasty code reloading that I was talking about in another thread, but
>> things get more tricky, because I can't use queues and pipes to
>> communicate with a running process that is not my child, correct?
>>
> Yes, I think that is correct. Instead of detaching a child process, you can
> create independent processes and use other frameworks for IPC. For example,
> Pyro.  It is not as effective as multiprocessing.Queue, but in return, you
> will have the option to run your service across multiple servers.
>
> The most effective IPC is usually through shared memory. But there is no OS
> independent standard Python module that can communicate over shared memory.
> Except multiprocessing of course, but AFAIK it can only be used to
> communicate between fork()-ed processes.


Thanks, there is another thing which is able to interact with running
processes in theory:
https://github.com/lmacken/pyrasite

I don't know, though, whether it's a good idea to use such an approach
for production code; as far as I understood, it uses gdb.  In theory,
though, I could set up every subprocess with all the data
it needs, so I might not even need to share data between them.

Anyway, now I have another idea: to be able to stop the main process
without killing the subprocesses, I can use multiple forks.  Does the
following make sense?  I don't really need these subprocesses to be
daemons, since they should quit when done, but is there anything that
can go wrong with this approach?

from os import fork
from time import sleep
from itertools import count
from sys import exit

from multiprocessing import Process, Queue

class LongProcess(Process):
    def __init__(self, idx, queue):
        Process.__init__(self)
        # self.daemon = True
        self.queue = queue
        self.idx = idx

    def run(self):
        for i in count():
            self.queue.put("%d: %d" % (self.idx, i))
            print("adding %d: %d" % (self.idx, i))
            sleep(2)


if __name__ == '__main__':
    qu = Queue()

    # how do I do a multiple fork?
    for i in range(5):
        pid = fork()
        # if I create here all the data structures I should still be
        # able to do things
        if pid == 0:
            lp = LongProcess(1, qu)
            lp.start()
            lp.join()
            exit(0)
        else:
            print("started subprocess with pid ", pid)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why the different output in Eclipse and Python Shell?

2012-08-01 Thread Dave Angel
On 08/01/2012 12:45 AM, levi nie wrote:
> my code in Eclipse:
>
> dict.fromkeys(['China','America'])
> print "dict is",dict
>
> output: dict is <type 'dict'>
>
> my code in Python Shell:
>
> dict.fromkeys(['China','America'])
>
> output:{'America': None, 'China': None}
>
> The output in the Python Shell is what I want, but why not in Eclipse?
>
>

The Python Shell is an interactive interpreter, and prints the repr() of
expressions that you don't assign anywhere.  I don't know Eclipse, but I
suspect what you want to do is something like:

print "dict is", repr(dict)
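A minimal illustration of what happened (Python 3 syntax shown; Python 2 prints <type 'dict'> instead of <class 'dict'>):

```python
# The result of fromkeys must be assigned; it is a new dict.
d = dict.fromkeys(['China', 'America'])
print(d)       # {'China': None, 'America': None}

# The original code printed the built-in *type* called dict,
# because the return value of fromkeys was never stored.
print(dict)    # <class 'dict'>
```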



-- 

DaveA

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread Laszlo Nagy




Thanks, there is another thing which is able to interact with running
processes in theory:
https://github.com/lmacken/pyrasite

I don't know, though, whether it's a good idea to use such an approach
for production code; as far as I understood, it uses gdb.  In theory,
though, I could set up every subprocess with all the data
it needs, so I might not even need to share data between them.

Anyway, now I have another idea: to be able to stop the main process
without killing the subprocesses, I can use multiple forks.  Does the
following make sense?  I don't really need these subprocesses to be
daemons, since they should quit when done, but is there anything that
can go wrong with this approach?
One thing is sure: os.fork() doesn't work under Microsoft Windows. Under 
Unix, I'm not sure if os.fork() can be mixed with 
multiprocessing.Process.start(). I could not find official documentation 
on that.  This must be tested on your actual platform. And don't forget 
to use Queue.get() in your test. :-)
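As a portable alternative to mixing raw os.fork() with multiprocessing, a plain Process/Queue version can be sketched like this (a hypothetical minimal example, which does exercise Queue.get() as suggested):

```python
# Sketch: spawn workers with multiprocessing only; no os.fork() needed.
from multiprocessing import Process, Queue

def producer(q, idx):
    # Each child puts one message on the queue and exits.
    q.put("%d: hello" % idx)

if __name__ == '__main__':
    q = Queue()
    procs = [Process(target=producer, args=(q, i)) for i in range(3)]
    for p in procs:
        p.start()
    msgs = [q.get() for _ in range(3)]   # Queue.get() actually exercised
    for p in procs:
        p.join()
    print(sorted(msgs))    # ['0: hello', '1: hello', '2: hello']
```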


--
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread andrea crotti
2012/8/1 Laszlo Nagy :
> One thing is sure: os.fork() doesn't work under Microsoft Windows. Under
> Unix, I'm not sure if os.fork() can be mixed with
> multiprocessing.Process.start(). I could not find official documentation on
> that.  This must be tested on your actual platform. And don't forget to use
> Queue.get() in your test. :-)
>

Yes, I know; we don't care about Windows for this particular project.
I think mixing multiprocessing and fork should do no harm, but it is
probably unnecessary, since I'm already in another process after the
fork, so I can just make it run what I want.

Otherwise, is there a way to do the same thing using only multiprocessing?
(running a process that is detachable from the process that created it)
-- 
http://mail.python.org/mailman/listinfo/python-list


CRC-checksum failed in gzip

2012-08-01 Thread andrea crotti
We're having some really obscure problems with gzip.
There is a program running with python2.7 on a 2.6.18-128.el5xen (red
hat I think) kernel.

Now this program does the following:
if filename == 'out2.txt':
    out2 = open('out2.txt')
elif filename == 'out2.txt.gz':
    out2 = open('out2.txt.gz')

text = out2.read()

out2.close()

Very simple, right? But sometimes we get a checksum error.
Reading the code I found the following:

 - the CRC is at the end of the file (the last 8 bytes) and is computed
against the whole file
 - after the CRC there is the EOF marker
 - readline() doesn't trigger the checksum verification at the
beginning, but only when the EOF is reached
 - until a file is flushed or closed you can't read the new content in it

but the problem is that we can't reproduce it: doing it manually on
the same files works perfectly, and the same files sometimes work and
sometimes don't.

The files are on a shared NFS drive; I'm starting to think that it's a
network/fs problem, which might truncate the file, adding an EOF
before the end and thus making the checksum fail. But is that possible?
Or what else could it be?
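As an illustrative sketch (not code from the thread), a gzip stream whose stored CRC bytes are damaged fails with exactly this kind of error, and only when the reader reaches EOF:

```python
# Sketch: corrupt the 8-byte gzip trailer (CRC32 + size) and watch
# the CRC check fail only once the reader hits EOF.
import gzip
import io

buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as f:
    f.write(b'some log output\n' * 100)

data = bytearray(buf.getvalue())
data[-8] ^= 0xFF    # flip one byte of the stored CRC32

try:
    gzip.GzipFile(fileobj=io.BytesIO(bytes(data))).read()
except OSError as e:    # IOError in Python 2, BadGzipFile in Python 3
    print(e)            # e.g. CRC check failed 0x... != 0x...
```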
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread Laszlo Nagy




Yes I know we don't care about Windows for this particular project..
I think mixing multiprocessing and fork should not harm, but probably
is unnecessary since I'm already in another process after the fork so
I can just make it run what I want.

Otherwise is there a way to do same thing only using multiprocessing?
(running a process that is detachable from the process that created it)

I'm afraid there is no way to do that. I'm not even sure if 
multiprocessing.Queue will work if you detach a forked process.

--
http://mail.python.org/mailman/listinfo/python-list


Re: CRC-checksum failed in gzip

2012-08-01 Thread Laszlo Nagy

On 2012-08-01 12:39, andrea crotti wrote:

We're having some really obscure problems with gzip.
There is a program running with python2.7 on a 2.6.18-128.el5xen (red
hat I think) kernel.

Now this program does the following:
if filename == 'out2.txt':
    out2 = open('out2.txt')
elif filename == 'out2.txt.gz':
    out2 = open('out2.txt.gz')

A gzip file is binary. You should open it in binary mode.

out2 = open('out2.txt.gz', "rb")

Otherwise carriage return and newline characters will be converted (depending 
on the platform).
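A small self-contained sketch of the modes involved (gzip.open implies binary mode and decompresses; a plain open() on the container would need "rb", not just "b"):

```python
# Sketch: write and read back a gzip file with the correct modes.
import gzip
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'out2.txt.gz')

with gzip.open(path, 'wb') as f:      # binary write, compressed
    f.write(b'hello world\n')

with gzip.open(path, 'rb') as f:      # binary read, decompressed
    text = f.read()

print(text)    # b'hello world\n'
```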


--
http://mail.python.org/mailman/listinfo/python-list


Re: CRC-checksum failed in gzip

2012-08-01 Thread andrea crotti
2012/8/1 Laszlo Nagy :
> On 2012-08-01 12:39, andrea crotti wrote:
>>
>> We're having some really obscure problems with gzip.
>> There is a program running with python2.7 on a 2.6.18-128.el5xen (red
>> hat I think) kernel.
>>
>> Now this program does the following:
>> if filename == 'out2.txt':
>>   out2 = open('out2.txt')
>> elif filename == 'out2.txt.gz':
>>   out2 = open('out2.txt.gz')
>
> Gzip file is binary. You should open it in binary mode.
>
> out2 = open('out2.txt.gz', "rb")
>
> Otherwise carriage return and newline characters will be converted
> (depending on the platform).
>
>
> --
> http://mail.python.org/mailman/listinfo/python-list


Ah no, sorry, I just wrote that part of the code wrong; it was
out2 = gzip.open('out2.txt.gz'), because otherwise nothing could possibly work.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread Roy Smith
In article ,
 Laszlo Nagy  wrote:

> Yes, I think that is correct. Instead of detaching a child process, you 
> can create independent processes and use other frameworks for IPC. For 
> example, Pyro.  It is not as effective as multiprocessing.Queue, but in 
> return, you will have the option to run your service across multiple 
> servers.

You might want to look at beanstalk (http://kr.github.com/beanstalkd/).  
We've been using it in production for the better part of two years.  At 
a 30,000 foot level, it's an implementation of queues over named pipes 
over TCP, but it takes care of a zillion little details for you.

Setup is trivial, and there are clients for all sorts of languages.  For a 
Python client, go with beanstalkc (pybeanstalk appears to be 
abandonware).
> 
> The most effective IPC is usually through shared memory. But there is no 
> OS independent standard Python module that can communicate over shared 
> memory.

It's true that shared memory is faster than serializing objects over a 
TCP connection.  On the other hand, it's hard to imagine anything 
written in Python where you would notice the difference.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread Laszlo Nagy



The most effective IPC is usually through shared memory. But there is no
OS independent standard Python module that can communicate over shared
memory.

It's true that shared memory is faster than serializing objects over a
TCP connection.  On the other hand, it's hard to imagine anything
written in Python where you would notice the difference.

Well, except in response times. ;-)

The TCP stack likes to wait after you call send() on a socket. Yes, you 
can use setsockopt with TCP_NODELAY to disable that, but my experience 
is that response times with TCP can still be long, especially when you 
have to do many request-response pairs.


It also depends on the protocol design - if you can reduce the number of 
request-response pairs then it helps a lot.
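The wait described above comes from Nagle's algorithm; the socket option that disables it is TCP_NODELAY. A minimal sketch:

```python
# Sketch: disable Nagle's algorithm to trade throughput for latency.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# The option can be checked before the socket is even connected.
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # nonzero
s.close()
```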

--
http://mail.python.org/mailman/listinfo/python-list


Re: CRC-checksum failed in gzip

2012-08-01 Thread Laszlo Nagy



very simple right? But sometimes we get a checksum error.

Do you have a traceback showing the actual error?



  - the CRC is at the end of the file (the last 8 bytes) and is computed
against the whole file
  - after the CRC there is the EOF marker
  - readline() doesn't trigger the checksum verification at the
beginning, but only when the EOF is reached
  - until a file is flushed or closed you can't read the new content in it
How do you write the file? Is it written from another Python program? 
Can we see the source code of that?


but the problem is that we can't reproduce it, because doing it
manually on the same files it works perfectly,
and the same files some time work some time don't work.
The problem might be with the saved file. Once you get an error for a 
given file, can you reproduce the error using the same file?


The files are on a shared NFS drive, I'm starting to think that it's a
network/fs problem, which might truncate the file
adding an EOF before the end and thus making the checksum fail..
But is it possible?
Or what else could it be?

Can you try to run the same program on a local drive?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread Laszlo Nagy

On 2012-08-01 12:59, Roy Smith wrote:

In article ,
  Laszlo Nagy  wrote:


Yes, I think that is correct. Instead of detaching a child process, you
can create independent processes and use other frameworks for IPC. For
example, Pyro.  It is not as effective as multiprocessing.Queue, but in
return, you will have the option to run your service across multiple
servers.

You might want to look at beanstalk (http://kr.github.com/beanstalkd/).
We've been using it in production for the better part of two years.  At
a 30,000 foot level, it's an implementation of queues over named pipes
over TCP, but it takes care of a zillion little details for you.

Looks very simple to use. Too bad that it doesn't work on Windows systems.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Is Python a commercial proposition ?

2012-08-01 Thread David
On 01/08/2012, lipska the kat  wrote:
> On 01/08/12 09:06, Mark Lawrence wrote:
>>
>> You complete ignoramus, if it gets poured in advance that's no good to
>> anybody as it'll go flat. Has to stay in the pump until you're ready to
>> drink it from the glass. Don't you know anything about the importance of
>> process and timing? :)
>
> Heh heh, obviously never got drunk ... er I mean served behind the bar
> at uni/college/pub %-}

Nah, obviously *is* drunk ;p
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is Python a commercial proposition ?

2012-08-01 Thread Stefan Behnel
David, 01.08.2012 13:59:
> On 01/08/2012, lipska the kat wrote:
>> On 01/08/12 09:06, Mark Lawrence wrote:
>>>
>>> You complete ignoramus, if it gets poured in advance that's no good to
>>> anybody as it'll go flat. Has to stay in the pump until you're ready to
>>> drink it from the glass. Don't you know anything about the importance of
>>> process and timing? :)
>>
>> Heh heh, obviously never got drunk ... er I mean served behind the bar
>> at uni/college/pub %-}
> 
> Nah, obviously *is* drunk ;p

Would you mind taking this slightly off-topic discussion off the list?

Thanks.

Stefan


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: EXTERNAL: Re: missing python-config and building python on Windows

2012-08-01 Thread Damon Register

On 7/31/2012 11:49 PM, Mark Hammond wrote:

On 1/08/2012 10:48 AM, Damon Register wrote:

1. though I have looked in a few readme files, I don't see instructions for
installing what I have just built using MSVC.  Where can I find the
instructions for installing after building with MSVC?


There is no such process.  In general, you can just run directly from the built 
tree.

That is a bummer.  That makes me more curious about how the Windows
installer was made and how all the pieces were gathered together.


I'm afraid I don't know what python-config is.  It appears it might be a 
reflection of how Python
was configured and built on *nix systems - if that is the case then it is 
expected that one does not
exist for Windows (as it doesn't use the *nix build chain).

which means, I guess, that mingw is barely supported if at all.
While it may be Windows, mingw/msys is a nice way to build many
programs that are unix oriented.  I suppose that just for fun I
should try to build python on SuSE to see how it goes.


3. It seems that MSVC doesn't produce the .a library files needed for
linking
into a mingw built program.  Do I have to do that fun trick to
create the
.a from the dll?


I'm surprised MSVC *can* build .a files for mingw - but AFAIK, even if MSVC 
could do that, I believe
Python makes no attempt to build with support for linking into mingw programs.

I don't know that MSVC can do this.  The only process of which I am aware is a
two step process using pexports and dlltool to generate the .a file from a dll.
One reason I was using the python.org installer is that it already had the
python27.a file.  Now I am even more curious about what was used to build python
and create that installer.

The python.org installer provided all I needed for building most python-dependent
apps with mingw, until I ran into one that needed python-config.  I suppose that
if python-config does what I suspect it does (produce cflags and ldflags as
does pkg-config) then perhaps I could just fake it by replacing use of
python-config with what the cflags and ldflags should be for where I have
python.

Damon Register

--
http://mail.python.org/mailman/listinfo/python-list


Re: CRC-checksum failed in gzip

2012-08-01 Thread andrea crotti
Full traceback:

Exception in thread Thread-8:
Traceback (most recent call last):
  File "/user/sim/python/lib/python2.7/threading.py", line 530, in
__bootstrap_inner
self.run()
  File "/user/sim/tests/llif/AutoTester/src/AutoTester2.py", line 67, in run
self.processJobData(jobData, logger)
  File "/user/sim/tests/llif/AutoTester/src/AutoTester2.py", line 204,
in processJobData
self.run_simulator(area, jobData[1] ,log)
  File "/user/sim/tests/llif/AutoTester/src/AutoTester2.py", line 142,
in run_simulator
report_file, percentage, body_text = SimResults.copy_test_batch(log, area)
  File "/user/sim/tests/llif/AutoTester/src/SimResults.py", line 274,
in copy_test_batch
out2_lines = out2.read()
  File "/user/sim/python/lib/python2.7/gzip.py", line 245, in read
self._read(readsize)
  File "/user/sim/python/lib/python2.7/gzip.py", line 316, in _read
self._read_eof()
  File "/user/sim/python/lib/python2.7/gzip.py", line 338, in _read_eof
hex(self.crc)))
IOError: CRC check failed 0x4f675fba != 0xa9e45aL


- The file is written with the linux gzip program.
- no, I can't reproduce the error with the exact same file that
failed; that's what is really puzzling.
  There seems to be no clear pattern: it just randomly fails. The file
is also only opened for reading by this program,
  so in theory there is no way it can be corrupted.

  I also checked with lsof whether any processes have it open, but
nothing appears.

- I can't really try on a local disk; it might take ages, unfortunately
(we are rewriting this system from scratch anyway)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread andrea crotti
2012/8/1 Roy Smith :
> In article ,
>  Laszlo Nagy  wrote:
>
>> Yes, I think that is correct. Instead of detaching a child process, you
>> can create independent processes and use other frameworks for IPC. For
>> example, Pyro.  It is not as effective as multiprocessing.Queue, but in
>> return, you will have the option to run your service across multiple
>> servers.
>
> You might want to look at beanstalk (http://kr.github.com/beanstalkd/).
> We've been using it in production for the better part of two years.  At
> a 30,000 foot level, it's an implementation of queues over named pipes
> over TCP, but it takes care of a zillion little details for you.
>
> Setup is trivial, and there's clients for all sorts of languages.  For a
> Python client, go with beanstalkc (pybeanstalk appears to be
> abandonware).
>>
>> The most effective IPC is usually through shared memory. But there is no
>> OS independent standard Python module that can communicate over shared
>> memory.
>
> It's true that shared memory is faster than serializing objects over a
> TCP connection.  On the other hand, it's hard to imagine anything
> written in Python where you would notice the difference.
> --
> http://mail.python.org/mailman/listinfo/python-list


That does look nice, and I would like to have something like that.
But since I have to convince my boss of another external dependency, I
think it might be worth trying out zeromq instead, which can do
similar things and looks more powerful. What do you think?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: CRC-checksum failed in gzip

2012-08-01 Thread Laszlo Nagy


- The file is written with the linux gzip program.
- no, I can't reproduce the error with the exact same file that
failed, that's what is really puzzling,

How do you make sure that no process is reading the file before it is 
fully flushed to disk?


Possible way of testing for this kind of error: before you open a file, 
use os.stat to determine its size, and write out the size and the file 
path into a log file. Whenever an error occurs, compare the actual size 
of the file with the logged value. If they are different, then you have 
tried to read from a file that was growing at that time.


Suggestion: from the other process, write the file into a different file 
(for example, "file.gz.tmp"). Once the file is flushed and closed, use 
os.rename() to give its final name. On POSIX systems, the rename() 
operation is atomic.
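The write-then-rename suggestion can be sketched like this (file names are illustrative):

```python
# Sketch: write to a temporary name, then atomically publish it.
# Readers can never observe a half-written 'data.gz'.
import gzip
import os
import tempfile

dirname = tempfile.mkdtemp()
final = os.path.join(dirname, 'data.gz')
tmp = final + '.tmp'

with gzip.open(tmp, 'wb') as f:
    f.write(b'payload')      # the writer only ever touches the .tmp name

os.rename(tmp, final)        # atomic on POSIX: 'data.gz' appears complete

with gzip.open(final, 'rb') as f:
    print(f.read())          # b'payload'
```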




   There seems to be no clear pattern: it just randomly fails. The file
is also only opened for reading by this program,
   so in theory there is no way it can be corrupted.
Yes, there is. Gzip stores a CRC for the whole compressed stream. So if 
the file is not fully flushed to the disk, then you can only read a 
fragment of the stream, and that changes the computed CRC.


   I also checked with lsof whether any processes have it open, but
nothing appears.
lsof doesn't work very well over nfs. You can have other processes on 
different computers (!) writing the file. lsof only lists the processes 
on the system it is executed on.


- can't really try on the local disk, might take ages unfortunately
(we are rewriting this system from scratch anyway)



--
http://mail.python.org/mailman/listinfo/python-list


Re: CRC-checksum failed in gzip

2012-08-01 Thread andrea crotti
2012/8/1 Laszlo Nagy :
>>there seems to be no clear pattern and it just randomly fails. The file
>> is also just open for read from this program,
>>so in theory no way that it can be corrupted.
>
> Yes, there is. Gzip stores a CRC for the whole compressed stream. So if the
> file is not fully flushed to the disk, then you can only read a fragment of
> the stream, and that changes the computed CRC.
>
>>
>>I also checked with lsof if there are processes that opened it but
>> nothing appears..
>
> lsof doesn't work very well over nfs. You can have other processes on
> different computers (!) writing the file. lsof only lists the processes on
> the system it is executed on.
>
>>
>> - can't really try on the local disk, might take ages unfortunately
>> (we are rewriting this system from scratch anyway)
>>
>


Thanks a lot! Someone writing to the file while we read it might be
an explanation; the problem is that everyone claims that they are only
reading the file.

Apparently this file is generated once and, a long time after, only read
by two different tools (in sequence), so in theory this should not be
possible either. I'll investigate more in this direction, since
it's the only reasonable explanation I've found so far.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: CRC-checksum failed in gzip

2012-08-01 Thread Laszlo Nagy




Thanks a lot! Someone writing to the file while we read it might be
an explanation; the problem is that everyone claims that they are only
reading the file.
If that is true, then make that file system read only. Soon it will turn 
out who is writing them. ;-)


Apparently this file is generated once and, a long time after, only read
by two different tools (in sequence), so in theory this should not be
possible either. I'll investigate more in this direction, since
it's the only reasonable explanation I've found so far.

A safe solution would be to develop a system where files go through 
"states" in a predefined order:


* allow programs to write into files with .incomplete extension.
* allow them to rename the file to .complete.
* create a single program that renames .complete files to .gz files 
AFTER making them read-only for everybody else.

* readers should only read .gz file
* .gz files are then guaranteed to be complete.
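The state sequence above could be sketched as follows (helper names are hypothetical, and the payload is plain bytes for brevity; in the described setup the final file would be actual gzip output):

```python
# Sketch of the .incomplete -> .complete -> .gz pipeline described above.
import os
import stat
import tempfile

def write_incomplete(base, data):
    with open(base + '.incomplete', 'wb') as f:
        f.write(data)

def mark_complete(base):
    os.rename(base + '.incomplete', base + '.complete')

def publish(base):
    # Make it read-only for everybody, then expose the final name.
    os.chmod(base + '.complete', stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)
    os.rename(base + '.complete', base + '.gz')

base = os.path.join(tempfile.mkdtemp(), 'report')
write_incomplete(base, b'data')
mark_complete(base)
publish(base)
print(os.path.exists(base + '.gz'))   # True
```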


--
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread Grant Edwards
On 2012-08-01, Laszlo Nagy  wrote:
>>
>> As I wrote "I found many nice things (Pipe, Manager and so on), but
>> actually even
>> this seems to work:" yes I did read the documentation.
> Sorry, I did not want to be offensive.
>>
>> I was just surprised that it worked better than I expected even
>> without Pipes and Queues, but now I understand why..
>>
>> Anyway now I would like to be able to detach subprocesses to avoid the
>> nasty code reloading that I was talking about in another thread, but
>> things get more tricky, because I can't use queues and pipes to
>> communicate with a running process that is not my child, correct?
>>
> Yes, I think that is correct.

I don't understand why detaching a child process on Linux/Unix would
make IPC stop working.  Can somebody explain?

-- 
Grant Edwards   grant.b.edwardsYow! My vaseline is
  at   RUNNING...
  gmail.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread Laszlo Nagy



things get more tricky, because I can't use queues and pipes to
communicate with a running process that is not my child, correct?


Yes, I think that is correct.

I don't understand why detaching a child process on Linux/Unix would
make IPC stop working.  Can somebody explain?

It is implemented with shared memory. I think (although I'm not 100% 
sure) that shared memory is created *and freed up* (shm_unlink() system 
call) by the parent process. It makes sense, because the child processes 
will surely die with the parent. If you detach a child process, then it 
won't be killed with its original parent. But the shared memory will be 
freed by the original parent process anyway. I suspect that the child 
that has mapped that shared memory segment will try to access a freed up 
resource, do a segfault or something similar.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread Laszlo Nagy



Yes, I think that is correct.

I don't understand why detaching a child process on Linux/Unix would
make IPC stop working.  Can somebody explain?

It is implemented with shared memory. I think (although I'm not 100% 
sure) that shared memory is created *and freed up* (shm_unlink() 
system call) by the parent process. It makes sense, because the child 
processes will surely die with the parent. If you detach a child 
process, then it won't be killed with its original parent. But the 
shared memory will be freed by the original parent process anyway. I 
suspect that the child that has mapped that shared memory segment will 
try to access a freed up resource, do a segfault or something similar.
So detaching the child process will not make IPC stop working. But 
exiting from the original parent process will. (And why else would you 
detach the child?)


--
http://mail.python.org/mailman/listinfo/python-list


xlrd 0.8.0 released!

2012-08-01 Thread Chris Withers

Hi All,

I'm pleased to announce the release of xlrd 0.8.0:

http://pypi.python.org/pypi/xlrd/0.8.0

This release finally lands the support for both .xls and .xlsx files.
Many thanks to John Machin for all his work on making this happen.
Opening of .xlsx files is seamless, just use xlrd as you did before and 
it all should "just work".


xlrd 0.8.0 is also the first release that targets Python 2.6 and 
2.7, but no Python 3 just yet. Python 2.5 and below may work but are not 
supported. If you need to use Python 2.5 or earlier, please stick to 
xlrd 0.7.x.


Speaking of xlrd 0.7.x, that's now in "requested maintenance only" mode 
;-) That means, if possible, use 0.8.x. If you have a really good reason 
for sticking with 0.7.x, and you find a bug that you can't work around, 
then please make this clear on the [email protected] list and 
we'll see what we can do.


If you find any problems, please ask about them on the list, or submit 
an issue on GitHub:


https://github.com/python-excel/xlrd/issues

Full details of all things Python and Excel related can be found here:

http://www.python-excel.org/

cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk
--
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread andrea crotti
2012/8/1 Laszlo Nagy :
>
> So detaching the child process will not make IPC stop working. But exiting
> from the original parent process will. (And why else would you detach the
> child?)
>
> --
> http://mail.python.org/mailman/listinfo/python-list


Well, it makes perfect sense to me that it stops working. So either:
- I use zeromq or something similar to communicate, or
- I make every process independent, with no need for further
communication with the parent.
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: why the different output in Eclipse and Python Shell?

2012-08-01 Thread Prasad, Ramit
> > my code in Eclipse:
> >
> > dict.fromkeys(['China','America'])
> > print "dict is",dict
> >
> > output: dict is 
> >
> > my code in Python Shell:
> >
> > dict.fromkeys(['China','America'])
> >
> > output:{'America': None, 'China': None}
> >
> > Output in Python Shell is what i wanna,but why not in Eclipse?
> >
> >
> 
> The Python Shell is an interactive debugger, and prints the repr() of
> expressions that you don't assign anywhere.  I don't know Eclipse, but I
> suspect what you want to do is something like:
> 
> print "dict is", repr(dict)

I think you mean
print "dict is", repr(dict.fromkeys(['China','America']))

Otherwise you are just printing the repr of the dict type
and not the dictionary created. I would rather store the output and
then print it.

d = dict.fromkeys(['China','America'])
print "dict is", d

Ramit

This email is confidential and subject to important disclaimers and
conditions including on offers for the purchase or sale of
securities, accuracy and completeness of information, viruses,
confidentiality, legal privilege, and legal entity disclaimers,
available at http://www.jpmorgan.com/pages/disclosures/email.  
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: EXTERNAL: Re: missing python-config and building python on Windows

2012-08-01 Thread Prasad, Ramit
> On 7/31/2012 11:49 PM, Mark Hammond wrote:
> > On 1/08/2012 10:48 AM, Damon Register wrote:
> >> 1. though I have looked in a few readme files, I don't see instructions for
> >> installing what I have just built using MSVC.  Where can I find the
> >> instructions for installing after building with MSVC?
> >
> > There is no such process.  In general, you can just run directly from the
> built tree.
> That is a bummer.  That makes me more curious about how the Windows
> installer was made and how all the pieces were gathered together.
> 
> > I'm afraid I don't know what python-config is.  It appears it might be a
> reflection of how Python
> > was configured and build on *nix systems - if that is the case then it is
> expected that one does not
> > exist for Windows (as it doesn't use the *nix build chain).
> which means, I guess, that mingw is barely supported if at all.
> While it may be Windows, mingw/msys is a nice way to build many
> programs that are unix oriented.  I suppose that just for fun I
> should try to build python on SuSE to see how it goes.
> 
> >> 3. It seems that MSVC doesn't produce the .a library files needed for
> >> linking
> >> into a mingw built program.  Do I have to do that fun trick to
> >> create the
> >> .a from the dll?
> >
> > I'm surprised MSVC *can* build .a files for mingw - but AFAIK, even if MSVC
> could do that, I believe
> > Python makes no attempt to build with support for linking into mingw
> programs.
> I don't know that MSVC can do this.  The only process of which I am aware is a
> two step process using pexports and dlltool to generate the .a file from a
> dll.
> One reason I was using the python.org installer is that it already had the
> python27.a file.  Now I am even more curious about what was used to build
> python
> and create that installer.
> 
> The python.org installer provided all I needed for build most python dependent
> apps with mingw until I ran into one that needed python-config.  I suppose
> that
> if python-config does what I suspect it does (produce cflags and ldflags as
> does pkg-config) then perhaps I could just fake it by replacing use of
> python-config with what the cflags and ldflags should be for where I have
> python.

I have no knowledge about building Python but does this help? 
http://wiki.python.org/moin/Building%20Python%20with%20the%20free%20MS%20C%20Toolkit
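If python-config behaves like pkg-config (printing compiler and linker
flags), faking it could be as small as a shell function. The include and
library paths below are placeholders for a typical mingw layout, not values
taken from this thread:

```shell
#!/bin/sh
# Hypothetical stand-in for python-config; the paths are assumptions,
# adjust them to wherever Python is actually installed.
python_config() {
    case "$1" in
        --cflags)  echo "-I/c/Python27/include" ;;
        --ldflags) echo "-L/c/Python27/libs -lpython27" ;;
        *)         echo "usage: python_config --cflags|--ldflags" >&2
                   return 2 ;;
    esac
}

python_config --cflags    # prints the -I flag for the headers
```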
 

Ramit

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: CRC-checksum failed in gzip

2012-08-01 Thread Steven D'Aprano
On Wed, 01 Aug 2012 14:01:45 +0100, andrea crotti wrote:

> Full traceback:
> 
> Exception in thread Thread-8:

"DANGER DANGER DANGER WILL ROBINSON!!!"

Why didn't you say that there were threads involved? That puts a 
completely different perspective on the problem.

I *was* going to write back and say that you probably had either file 
system corruption, or network errors. But now that I can see that you 
have threads, I will revise that and say that you probably have a bug in 
your thread handling code.

I must say, Andrea, your initial post asking for help was EXTREMELY 
misleading. You over-simplified the problem to the point that it no 
longer has any connection to the reality of the code you are running. 
Please don't send us on wild goose chases after bugs in code that you 
aren't actually running.


>   there seems to be no clear pattern and just randmoly fails.

When you start using threads, you have to expect these sorts of 
intermittent bugs unless you are very careful.

My guess is that you have a bug where two threads read from the same file 
at the same time. Since each read shares state (the position of the file 
pointer), you're going to get corruption. Because it depends on timing 
details of which threads do what at exactly which microsecond, the effect 
might as well be random.

Example: suppose the file contains three blocks A B and C, and a 
checksum. Thread 8 starts reading the file, and gets block A and B. Then 
thread 2 starts reading it as well, and gets half of block C. Thread 8 
gets the rest of block C, calculates the checksum, and it doesn't match.
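The shared state is visible even without any timing luck: a file-like
object carries a single position, so two readers taking turns each see
only part of the stream. A self-contained sketch, using an in-memory
stream as a stand-in for the real file:

```python
import io

# One shared file-like object: every read() advances the same position.
shared = io.BytesIO(b"AAAABBBBCCCC")

first = shared.read(4)   # a "thread 8" read gets b"AAAA"
second = shared.read(4)  # a "thread 2" read continues mid-stream: b"BBBB"
print(first, second)
```

Neither reader ever sees the whole stream, which is exactly what breaks a
checksum computed over it.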

I recommend that you run a file system check on the remote disk. If it 
passes, you can eliminate file system corruption. Also, run some network 
diagnostics, to eliminate corruption introduced in the network layer. But 
I expect that you won't find anything there, and the problem is a simple 
thread bug. Simple, but really, really hard to find.

Good luck.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: NameError vs AttributeError

2012-08-01 Thread Ethan Furman

Terry Reedy wrote:

On 7/31/2012 4:49 PM, Chris Kaynor wrote:

On Tue, Jul 31, 2012 at 1:21 PM, Terry Reedy wrote:

Another example: KeyError and IndexError are both subscript errors,
but there is no SubscriptError superclass, even though both work
thru the same mechanism -- __getitem__.  The reason is that there is
no need for one. In 'x[y]', x is usually intented to be either a
sequence or mapping, but not possibly both. In the rare cases when
one wants to catch both errors, one can easily enough. To continue
the example above, popping an empty list and empty set produce
IndexError and KeyError respectively:

   try:
 while True:
   process(pop())
   except (KeyError, IndexError):
 pass  # empty collection means we are done


There is a base type for KeyError and IndexError: LookupError.

http://docs.python.org/library/exceptions.html#exception-hierarchy


Oh, so there is. Added in 1.5 strictly as a never-directly-raised base 
class for the above pair, now also directly raised in codecs.lookup. I 
have not decided if I want to replace the tuple in the code in my book.


I think I'd stick with the tuple -- LookupError could just as easily 
encompass NameError and AttributeError.
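For what it's worth, LookupError covers exactly the subscript pair and
nothing more; NameError and AttributeError are not included. A small
sketch of the pop-until-empty pattern from above:

```python
# LookupError is the common base of IndexError and KeyError only.
def pop_all(collection):
    """Pop until the collection is empty, whatever pop() raises."""
    items = []
    try:
        while True:
            items.append(collection.pop())
    except LookupError:  # IndexError (lists) and KeyError (sets, dicts)
        return items

print(pop_all([1, 2, 3]))                  # [3, 2, 1]
print(issubclass(NameError, LookupError))  # False
```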

--
http://mail.python.org/mailman/listinfo/python-list


Re: CRC-checksum failed in gzip

2012-08-01 Thread andrea crotti
2012/8/1 Steven D'Aprano :
> On Wed, 01 Aug 2012 14:01:45 +0100, andrea crotti wrote:
>
>> Full traceback:
>>
>> Exception in thread Thread-8:
>
> "DANGER DANGER DANGER WILL ROBINSON!!!"
>
> Why didn't you say that there were threads involved? That puts a
> completely different perspective on the problem.
>
> I *was* going to write back and say that you probably had either file
> system corruption, or network errors. But now that I can see that you
> have threads, I will revise that and say that you probably have a bug in
> your thread handling code.
>
> I must say, Andrea, your initial post asking for help was EXTREMELY
> misleading. You over-simplified the problem to the point that it no
> longer has any connection to the reality of the code you are running.
> Please don't send us on wild goose chases after bugs in code that you
> aren't actually running.
>
>
>>   there seems to be no clear pattern and just randmoly fails.
>
> When you start using threads, you have to expect these sorts of
> intermittent bugs unless you are very careful.
>
> My guess is that you have a bug where two threads read from the same file
> at the same time. Since each read shares state (the position of the file
> pointer), you're going to get corruption. Because it depends on timing
> details of which threads do what at exactly which microsecond, the effect
> might as well be random.
>
> Example: suppose the file contains three blocks A B and C, and a
> checksum. Thread 8 starts reading the file, and gets block A and B. Then
> thread 2 starts reading it as well, and gets half of block C. Thread 8
> gets the rest of block C, calculates the checksum, and it doesn't match.
>
> I recommend that you run a file system check on the remote disk. If it
> passes, you can eliminate file system corruption. Also, run some network
> diagnostics, to eliminate corruption introduced in the network layer. But
> I expect that you won't find anything there, and the problem is a simple
> thread bug. Simple, but really, really hard to find.
>
> Good luck.
>

Thanks a lot, that makes a lot of sense. I hadn't given this detail
before because I didn't write this code and completely forgot that
threads were involved; I'm just trying to help fix this bug.

Your explanation makes a lot of sense, but it's still surprising that
even just reading files without ever writing them can cause troubles
using threads :/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [pyxl] xlrd 0.8.0 released!

2012-08-01 Thread Matthew Smith
Thank you guys so much! I am so excited to finally have xlsx so my users
don't have extra steps!

On Wed, Aug 1, 2012 at 11:01 AM, Chris Withers wrote:

> Hi All,
>
> I'm pleased to announce the release of xlrd 0.8.0:
>
> http://pypi.python.org/pypi/**xlrd/0.8.0
>
> This release finally lands the support for both .xls and .xlsx files.
> Many thanks to John Machin for all his work on making this happen.
> Opening of .xlsx files is seamless, just use xlrd as you did before and it
> all should "just work".
>
> xlrd 0.8.0 is also the first release that that targets Python 2.6 and 2.7,
> but no Python 3 just yet. Python 2.5 and below may work but are not
> supported. If you need to use Python 2.5 or earlier, please stick to xlrd
> 0.7.x.
>
> Speaking of xlrd 0.7.x, that's now in "requested maintenance only" mode
> ;-) That means, if possible, use 0.8.x. If you have a really good reason
> for sticking with 0.7.x, and you find a bug that you can't work around,
> then please make this clear on the [email protected] and
> we'll see what we can do.
>
> If you find any problems, please ask about them on the list, or submit an
> issue on GitHub:
>
> https://github.com/python-**excel/xlrd/issues
>
> Full details of all things Python and Excel related can be found here:
>
> http://www.python-excel.org/
>
> cheers,
>
> Chris
>
> --
> Simplistix - Content Management, Batch Processing & Python Consulting
> - http://www.simplistix.co.uk
>
> --
> You received this message because you are subscribed to the Google Groups
> "python-excel" group.
> To post to this group, send an email to [email protected].
> To unsubscribe from this group, send email to python-excel+unsubscribe@**
> googlegroups.com .
> For more options, visit this group at http://groups.google.com/**
> group/python-excel?hl=en-GB
> .
>
>


-- 
Matthew Smith

Software Engineer at G2, Inc
Follow us on Twitter @G2_inc
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: NameError vs AttributeError

2012-08-01 Thread Terry Reedy

On 8/1/2012 11:53 AM, Ethan Furman wrote:

Terry Reedy wrote:

On 7/31/2012 4:49 PM, Chris Kaynor wrote:

On Tue, Jul 31, 2012 at 1:21 PM, Terry Reedy wrote:



one wants to catch both errors, one can easily enough. To continue
the example above, popping an empty list and empty set produce
IndexError and KeyError respectively:

   try:
 while True:
   process(pop())
   except (KeyError, IndexError):
 pass  # empty collection means we are done


There is a base type for KeyError and IndexError: LookupError.

http://docs.python.org/library/exceptions.html#exception-hierarchy


Oh, so there is. Added in 1.5 strictly as a never-directly-raised base
class for the above pair, now also directly raised in codecs.lookup. I
have not decided if I want to replace the tuple in the code in my book.


I think I'd stick with the tuple -- LookupError could just as easily
encompass NameError and AttributeError.


Thank you. Having to remember exactly which lookup errors are encompassed 
by LookupError illustrates my point about the cost of adding entities 
without necessity. It also illustrates the importance of careful 
naming. SubscriptError might have been better.


--
Terry Jan Reedy



--
http://mail.python.org/mailman/listinfo/python-list


Re: CRC-checksum failed in gzip

2012-08-01 Thread Laszlo Nagy



> Thanks a lot, that makes a lot of sense..  I haven't given this detail
> before because I didn't write this code, and I forgot that there were
> threads involved completely, I'm just trying to help to fix this bug.
>
> Your explanation makes a lot of sense, but it's still surprising that
> even just reading files without ever writing them can cause troubles
> using threads :/

Make sure that file objects are not shared between threads, if that is 
possible. That will probably solve the problem (if it is related to 
threads).

--
http://mail.python.org/mailman/listinfo/python-list


Re: EXTERNAL: Re: missing python-config and building python on Windows

2012-08-01 Thread Terry Reedy

On 8/1/2012 7:47 AM, Damon Register wrote:

On 7/31/2012 11:49 PM, Mark Hammond wrote:

On 1/08/2012 10:48 AM, Damon Register wrote:

1. though I have looked in a few readme files, I don't see
instructions for
installing what I have just built using MSVC.  Where can I find the
instructions for installing after building with MSVC?


There is no such process.  In general, you can just run directly from
the built tree.

That is a bummer.  That makes me more curious about how the Windows
installer was made and how all the pieces were gathered together.


All I know is that the Windows installer is a .msi file created by a 
script that uses msilib. I don't know whether that script is in the 
repository.


--
Terry Jan Reedy



--
http://mail.python.org/mailman/listinfo/python-list


Re: CRC-checksum failed in gzip

2012-08-01 Thread andrea crotti
2012/8/1 Laszlo Nagy :
>
>> Thanks a lot, that makes a lot of sense..  I haven't given this detail
>> before because I didn't write this code, and I forgot that there were
>> threads involved completely, I'm just trying to help to fix this bug.
>>
>> Your explanation makes a lot of sense, but it's still surprising that
>> even just reading files without ever writing them can cause troubles
>> using threads :/
>
> Make sure that file objects are not shared between threads. If that is
> possible. It will probably solve the problem (if that is related to
> threads).


Well I just have to create a lock I guess right?
with lock:
# open file
# read content
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: CRC-checksum failed in gzip

2012-08-01 Thread Laszlo Nagy



>> Make sure that file objects are not shared between threads. If that is
>> possible. It will probably solve the problem (if that is related to
>> threads).
>
> Well I just have to create a lock I guess right?

That is also a solution. You need to call file.read() inside an acquired 
lock.

> with lock:
>     # open file
>     # read content

But not that way! Your example will keep the lock acquired for the 
lifetime of the file, so it cannot be shared between threads.


More likely:

## Open file
lock = threading.Lock()
fin = gzip.open(file_path...)
# Now you can share the file object between threads.

# and do this inside any thread:
## data needed. block until the file object becomes usable.
with lock:
    data = fin.read()  # other threads are blocked while I'm reading
## use your data here, meanwhile other threads can read
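A runnable version of this pattern. An in-memory gzip stream stands in
for the real file so the sketch is self-contained, and the read and the
bookkeeping both happen under the lock so the chunks stay in order:

```python
import gzip
import io
import threading

# Build an in-memory gzip stream ("file_path" above would be a real file).
payload = b"line\n" * 100
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    f.write(payload)
buf.seek(0)

lock = threading.Lock()
fin = gzip.GzipFile(fileobj=buf, mode="rb")
chunks = []

def reader():
    while True:
        with lock:                   # serialize access to the shared handle
            chunk = fin.read(64)
            if chunk:
                chunks.append(chunk) # keep read order while still locked
        if not chunk:
            return

threads = [threading.Thread(target=reader) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(b"".join(chunks) == payload)   # True: no corruption with the lock
```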


--
http://mail.python.org/mailman/listinfo/python-list


Re: why the different output in Eclipse and Python Shell?

2012-08-01 Thread Dave Angel
On 08/01/2012 11:26 AM, Prasad, Ramit wrote:
>>> my code in Eclipse:
>>>
>>> dict.fromkeys(['China','America'])
>>> print "dict is",dict
>>>
>>> output: dict is 
>>>
>>> my code in Python Shell:
>>>
>>> dict.fromkeys(['China','America'])
>>>
>>> output:{'America': None, 'China': None}
>>>
>>> Output in Python Shell is what i wanna,but why not in Eclipse?
>>>
>>>
>> The Python Shell is an interactive debugger, and prints the repr() of
>> expressions that you don't assign anywhere.  I don't know Eclipse, but I
>> suspect what you want to do is something like:
>>
>> print "dict is", repr(dict)
> I think you mean
> print "dict is", repr(dict.fromkeys(['China','America']))
>
> Otherwise you are just printing the repr of the dict type
> and not the dictionary created. I would really store the output and
> then print it.
>
> d = dict.fromkeys(['China','America'])
> print "dict is", d
>
> Ramit
>
Absolutely right.  I meant to refer to the name bound to the dict, not
the dict class itself.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread Roy Smith
On Aug 1, 2012, at 9:25 AM, andrea crotti wrote:

> [beanstalk] does look nice and I would like to have something like that..
> But since I have to convince my boss of another external dependency I
> think it might be worth
> to try out zeromq instead, which can also do similar things and looks
> more powerful, what do you think?

I'm afraid I have no experience with zeromq, so I can't offer an opinion.

--
Roy Smith
[email protected]



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread Grant Edwards
On 2012-08-01, Laszlo Nagy  wrote:
>
 things get more tricky, because I can't use queues and pipes to
 communicate with a running process that it's noit my child, correct?

>>> Yes, I think that is correct.
>> I don't understand why detaching a child process on Linux/Unix would
>> make IPC stop working.  Can somebody explain?
>
> It is implemented with shared memory. I think (although I'm not 100% 
> sure) that shared memory is created *and freed up* (shm_unlink() system 
> call) by the parent process. It makes sense, because the child processes 
> will surely die with the parent. If you detach a child process, then it 
> won't be killed with its original parent. But the shared memory will be 
> freed by the original parent process anyway. I suspect that the child 
> that has mapped that shared memory segment will try to access a freed up 
> resource, do a segfault or something similar.

I still don't get it.  shm_unlink() works the same way unlink() does.
The resource itself doesn't cease to exist until all open file handles
are closed. From the shm_unlink() man page on Linux:

   The operation of shm_unlink() is analogous to unlink(2): it
   removes a shared memory object name, and, once all processes
   have unmapped the object, de-allocates and destroys the
   contents of the associated memory region. After a successful
   shm_unlink(), attempts to shm_open() an object with the same
   name will fail (unless O_CREAT was specified, in which case a
   new, distinct object is created).
   
Even if the parent calls shm_unlink(), the shared-memory resource will
continue to exist (and be usable) until all processes that are holding
open file handles unmap/close them.  So not only will detached
children not crash, they'll still be able to use the shared memory
objects to talk to each other.
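The same keep-alive-until-last-close rule is easy to demonstrate on a
POSIX system with a plain file and unlink(2), which shm_unlink() is
analogous to:

```python
import os
import tempfile

# unlink() removes the *name*; the data lives on until the last open
# descriptor is closed (shm_unlink() follows the same rule for POSIX
# shared memory names).
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.unlink(path)                  # the name is gone immediately...
assert not os.path.exists(path)
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 5)            # ...but the open descriptor still works
os.close(fd)
print(data)  # b'hello'
```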
   
-- 
Grant Edwards               grant.b.edwards at gmail.com
                            Yow! Why is it that when you DIE, you can't
                            take your HOME ENTERTAINMENT CENTER with you??
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Pythonmac-SIG] Py2app error

2012-08-01 Thread Mark Livingstone
Hi Guys,

OK, taking Chris' advice, I installed on a Snow Leopard machine:

cheyenne:dist marklivingstone$ ls ~/Downloads/
About Downloads.lpdf
numpy-1.6.2-py2.7-python.org-macosx10.3.dmg wxMac-2.8.12.tar
matplotlib-1.1.0-py2.7-python.org-macosx10.3.dmg
python-2.7.3-macosx10.6.dmg 
wxPython2.8-osx-docs-demos-2.8.12.1-universal-py2.7.dmg
mercurial-2.2.3_20120707-py2.7-macosx10.7   
scipy-0.11.0rc1-py2.7-python.org-macosx10.6.dmg 
wxPython2.8-osx-unicode-2.8.12.1-universal-py2.7.dmg

Then I went to Ronald's Bitbucket and built a current py2app setup.

I tried a build but got this:

 python ../mac-setup/setup_py2app.py py2app
Traceback (most recent call last):
  File "../mac-setup/setup_py2app.py", line 1, in 
import wx
  File 
"/usr/local/lib/wxPython-unicode-2.8.12.1/lib/python2.7/site-packages/wx-2.8-mac-unicode/wx/__init__.py",
line 45, in 
from wx._core import *
  File 
"/usr/local/lib/wxPython-unicode-2.8.12.1/lib/python2.7/site-packages/wx-2.8-mac-unicode/wx/_core.py",
line 4, in 
import _core_
ImportError: 
dlopen(/usr/local/lib/wxPython-unicode-2.8.12.1/lib/python2.7/site-packages/wx-2.8-mac-unicode/wx/_core_.so,
2): no suitable image found.  Did find:

/usr/local/lib/wxPython-unicode-2.8.12.1/lib/python2.7/site-packages/wx-2.8-mac-unicode/wx/_core_.so:
no matching architecture in universal wrapper
cheyenne:src marklivingstone$

I then did "python-32 ../mac-setup/setup_py2app.py py2app" , and built
using py2app my Salstat.app which built fine and created a
dist/Salstat.app

However, when I try to run the app, I get the following in the console log:


2/08/12 12:06:23.340 PM [0x0-0x421421].com.SalStat.SalStat:
/Users/marklivingstone/Documents/workspace/salstat-statistics-package-2/src/dist/SalStat.app/Contents/Resources/salstat.py:585:
SyntaxWarning: import * only allowed at module level
2/08/12 12:06:23.372 PM [0x0-0x421421].com.SalStat.SalStat:
argvemulator warning: fetching events failed
2/08/12 12:06:23.372 PM [0x0-0x421421].com.SalStat.SalStat: Traceback
(most recent call last):
2/08/12 12:06:23.372 PM [0x0-0x421421].com.SalStat.SalStat:   File
"/Users/marklivingstone/Documents/workspace/salstat-statistics-package-2/src/dist/SalStat.app/Contents/Resources/__boot__.py",
line 319, in 
2/08/12 12:06:23.373 PM [0x0-0x421421].com.SalStat.SalStat:
_run('salstat.py')
2/08/12 12:06:23.373 PM [0x0-0x421421].com.SalStat.SalStat:   File
"/Users/marklivingstone/Documents/workspace/salstat-statistics-package-2/src/dist/SalStat.app/Contents/Resources/__boot__.py",
line 311, in _run
2/08/12 12:06:23.374 PM [0x0-0x421421].com.SalStat.SalStat:
exec(compile(source, path, 'exec'), globals(), globals())
2/08/12 12:06:23.374 PM [0x0-0x421421].com.SalStat.SalStat:   File
"/Users/marklivingstone/Documents/workspace/salstat-statistics-package-2/src/dist/SalStat.app/Contents/Resources/salstat.py",
line 9, in 
2/08/12 12:06:23.374 PM [0x0-0x421421].com.SalStat.SalStat: import wx
2/08/12 12:06:23.374 PM [0x0-0x421421].com.SalStat.SalStat:   File
"wx/__init__.pyc", line 45, in 
2/08/12 12:06:23.374 PM [0x0-0x421421].com.SalStat.SalStat:   File
"wx/_core.pyc", line 4, in 
2/08/12 12:06:23.375 PM [0x0-0x421421].com.SalStat.SalStat:   File
"wx/_core_.pyc", line 18, in 
2/08/12 12:06:23.375 PM [0x0-0x421421].com.SalStat.SalStat:   File
"wx/_core_.pyc", line 11, in __load
2/08/12 12:06:23.375 PM [0x0-0x421421].com.SalStat.SalStat:
ImportError: 
dlopen(/Users/marklivingstone/Documents/workspace/salstat-statistics-package-2/src/dist/SalStat.app/Contents/Resources/lib/python2.7/lib-dynload/wx/_core_.so,
2): no suitable image found.  Did find:
2/08/12 12:06:23.375 PM [0x0-0x421421].com.SalStat.SalStat:

/Users/marklivingstone/Documents/workspace/salstat-statistics-package-2/src/dist/SalStat.app/Contents/Resources/lib/python2.7/lib-dynload/wx/_core_.so:
no matching architecture in universal wrapper
2/08/12 12:06:23.479 PM SalStat: SalStat Error
2/08/12 12:06:25.919 PM com.apple.launchd.peruser.501:
([0x0-0x421421].com.SalStat.SalStat[76293]) Exited with code: 255

Is it just not possible to use py2app to create an app for the Mac based on wx?

Also, the final result weighs in at 184MB. Are there any common
things that may be being pulled in that I should exclude?

Thanks in advance,

MarkL
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is Python a commercial proposition ?

2012-08-01 Thread David
On 01/08/2012, Stefan Behnel  wrote:
>
> Would you mind taking this slightly off-topic discussion off the list?

I always strive to stay on-topic. In fact immediately this thread went
off topic, 4 messages back, I did try to go off list, but got this
result from the OP:

Delivery to the following recipient failed permanently:
 [email protected]
Technical details of permanent failure:
Google tried to deliver your message, but it was rejected by the
recipient domain. We recommend contacting the other email provider for
further information about the cause of this error. The error that the
other server returned was: 554 554 delivery error: dd This user
doesn't have a yahoo.co.uk account ([email protected]) [-5] -
mta1050.mail.ukl.yahoo.com (state 17).
Date: Wed, 1 Aug 2012 09:31:43 +1000
Subject: Re: Is Python a commercial proposition ?
From: David 
To: lipska the kat 

Then, if someone is going to call me an ignoramus on a public list,
they will receive a response in the same forum.

So, I apologise to the list, but please note the unusual circumstances. Thanks.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pass data to a subprocess

2012-08-01 Thread Laszlo Nagy



> I still don't get it.  shm_unlink() works the same way unlink() does.
> The resource itself doesn't cease to exist until all open file handles
> are closed. From the shm_unlink() man page on Linux:
>
>     The operation of shm_unlink() is analogous to unlink(2): it
>     removes a shared memory object name, and, once all processes
>     have unmapped the object, de-allocates and destroys the
>     contents of the associated memory region. After a successful
>     shm_unlink(), attempts to shm_open() an object with the same
>     name will fail (unless O_CREAT was specified, in which case a
>     new, distinct object is created).
>
> Even if the parent calls shm_unlink(), the shared-memory resource will
> continue to exist (and be usable) until all processes that are holding
> open file handles unmap/close them.  So not only will detached
> children not crash, they'll still be able to use the shared memory
> objects to talk to each other.

I stand corrected. It should still be examined what kind of shared 
memory is used under non-Linux systems. System V on AIX? And what about 
Windows? So maybe the general answer is still no. But I guess the 
OP wanted this to work on a specific system.


Dear Andrea Crotti! Please try to detach two child processes, exit from 
the main process, and communicate over a multiprocessing queue. It will 
possibly work. Sorry for my bad advice.

--
http://mail.python.org/mailman/listinfo/python-list