OSSCamp Chandigarh April 2010 - Open Source Is The Future

2010-03-02 Thread Rishabh Verma
Hello All,

OSSCamp is again being organized in Chandigarh on April 10, 2010. This
is another step toward fostering the open source community in the City
Beautiful. The event is purely community-organized, put together by
open source evangelists from Chandigarh, and it would be great if you
could join us and make it a success. At a Camp, we love to
cross-talk, huddle together, raise some noise, celebrate technology,
argue over the coolest OS, fight over our favourite programming
languages, discuss stuff, and what not!

OSScamp Chandigarh April 2010
April 10, 2010
Venue: To Be Decided
Time: 10AM - 6PM

You can register for the event at: http://chd.osscamp.in/

Follow us on Twitter: http://twitter.com/osscamp

Facebook event page: http://www.facebook.com/event.php?eid=247304090115&ref=ts
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to find an COM object in using of pywin32

2010-03-02 Thread Alf P. Steinbach

* Steven Woody:

Hi,

I want to interact with an OLE application using pywin32.  The
problem is I have no idea how to find the object in OLEView or
how to figure out its interface.

Even with pywin32's examples, I don't understand, in the statement below,

  win32com.client.Dispatch('Excel.Application')

where the name 'Excel.Application' comes from.  In OLEView
(Microsoft's COM browser), I cannot find this name.


It's a "programmatic identifier" a.k.a. "progid".

It identifies a COM class and it's used as a readable but more 
name-collision-prone alternative to the 128-bit UUID.


You can find the programmatic identifiers in the Windows registry (use e.g. 
regedit); often they're not documented.



Cheers,

- Alf


Re: Method / Functions - What are the differences?

2010-03-02 Thread Bruno Desthuilliers

John Posner a écrit :

On 3/1/2010 2:59 PM, Bruno Desthuilliers wrote:


Answer here:

http://groups.google.com/group/comp.lang.python/tree/browse_frm/thread/bd71264b6022765c/3a77541bf9d6617d#doc_89d608d0854dada0 



I really have to put this in the wiki :-/



Bruno, I performed a light copy-edit of your writeup and put in some 
reStructuredText (reST) markup. The result is at:


  http://cl1p.net/bruno_0301.rst/


Cool.



The only sentence that I think needs work is:

  Having access to itself (of course), the
  instance (if there's one) and the class, it's easy for it
  to wrap all this into a **method** object.

Maybe this?

  With the instance object (if any) and class object available,
  it's easy to create a method object that wraps the function object.


That's perfect.

But there's also a typo to fix in the Python implementation of the 
Method object: in the call method, it should inject self.im_self as 
first arg, not self.im_func. This had been spotted by someone named John 
Posner, IIRC !-)
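To make the fix concrete, here is a minimal sketch (hypothetical code, not the actual wiki implementation) of a method object whose `__call__` injects `im_self`, the instance, rather than `im_func`, as the first argument:

```python
class Method:
    """Sketch of a bound-method wrapper, showing the fix discussed above."""
    def __init__(self, im_func, im_self):
        self.im_func = im_func    # the wrapped function object
        self.im_self = im_self    # the instance it is bound to (or None)

    def __call__(self, *args, **kwargs):
        if self.im_self is not None:
            # the fix: inject the *instance*, not the function, as first arg
            return self.im_func(self.im_self, *args, **kwargs)
        return self.im_func(*args, **kwargs)


class Greeter:
    label = "greeter"

def greet(self, name):
    return "%s greets %s" % (self.label, name)

bound = Method(greet, Greeter())
print(bound("Bob"))  # greeter greets Bob
```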





Begging pardon for my presumptuousness,


Begging pardon for my laziness :-/


learn ajax

2010-03-02 Thread groups_ads12
ajax.learn.net.in 


Re: Draft PEP on RSON configuration file format

2010-03-02 Thread Daniel Fetchinson
>> > But you are working on a solution in search of a problem.  The really
>> > smart thing to do would be pick something more useful to work on.  We
>> > don't need another configuration language.  I can't even say "yet
>> > another" because there's already a "yet another" called yaml.
>>
>> And in case you are new here let me assure you that Paul is saying
>> this with his full intention of being helpful to you. I also would
>> think that working on such a project might be fun and educational for
>> you but completely useless if you have users other than yourself in
>> mind. Again, I'm trying to be helpful here, so you can focus on a
>> project that is both fun/educational for you and also potentially
>> useful for others. This RSON business is not one of them.
>
> OK, but I am a bit unclear on what you and/or Paul are claiming.  It
> could be one of a number of things.  For example:
>
> - There is a preexisting file format suitable for my needs, so I
> should not invent another one.

I suspect this to be true, if we mean the same thing by "configuration
file format". Especially if RSON will be a superset of JSON.

> - If I invent a file format suitable for my needs, it couldn't
> possibly be general enough for anybody else.

Quite possibly; the reason is that the existing file formats have
ecosystems around them that make them attractive. Your format will
have to overcome that barrier to attract new users, which I think will
be very difficult given the wide range of existing formats covering
just about any use case.

> - Even if it was general enough for somebody else, there would only be
> two of them.

See above.

> I've been known to waste time (or be accused of wasting time) on
> various endeavors, but I like to know exactly *why* it is perceived to
> be a waste.

Don't get me wrong, I also waste a lot of time on hobby/fun/educational
projects ("waste" here only meaning "useless for others", not "useless
for me") because, well, they're hobbies and fun and educational :) It's
just good to know whether a given project falls inside or outside this
category.

Cheers,
Daniel



-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown


Re: Adding to a module's __dict__?

2010-03-02 Thread Jean-Michel Pichavant

Roy Smith wrote:

>From inside a module, I want to add a key-value pair to the module's
__dict__.  I know I can just do:

FOO = 'bar'

at the module top-level, but I've got 'FOO' as a string and what I
really need to do is

__dict__['Foo'] = 'bar'

When I do that, I get "NameError: name '__dict__' is not defined".  Is
it possible to do what I'm trying to do?
  

test.py:

import sys
varName= 'foo'
setattr(sys.modules[__name__], varName, 42)



in a shell:

>>> import test
>>> print test.foo
42


JM


Re: (and about tests) Re: Pedantic pickling error after reload?

2010-03-02 Thread Robert

well, reloading is the thing I do most in coding practice :-)
For me it's a basic thing, like cell proliferation in biology.


I simply never do it. It has subtle issues, one of which you found;
others you say you work around by introducing actual frameworks. But you
might well forget some corner cases and suddenly chase a chimera you deem
a bug, when in fact it is just an unwanted side effect of reloading.
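One of those corner cases can be shown without any framework. A minimal simulation (using `exec` on a synthetic module rather than a real file-based reload) of the classic pitfall: instances created before a reload keep referencing the *old* class object.

```python
import sys
import types

# Build a throwaway module containing a class.
mod = types.ModuleType("demo_mod")
exec("class Widget:\n    pass\n", mod.__dict__)
sys.modules["demo_mod"] = mod

old_instance = mod.Widget()

# "Edit and reload": the module's namespace now holds a brand-new class.
exec("class Widget:\n    pass\n", mod.__dict__)

print(isinstance(old_instance, mod.Widget))  # False -- a stale instance
```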


well, at dev time there is a different rule: the more bugs, the better

(they can raise/indicate certain design and coding weaknesses)



And all this extra complexity is only good for the process of actually
changing the code. It doesn't help you maintaining code quality.


neither does an improved editor or interactive/debugging/reload
scheme replace tests, nor do tests replace those tools the other way
around. They are different dimensions, and improvements on each level
reinforce the others. E.g. with a good reload scheme one can even work
the tests out better (and debug more efficiently when a test fails).


that little reload support code is a rather constant, small stub
compared to the app size (except perhaps for trivial 1-day apps).

Most 'utility' modules need no extra care at all.
Add maybe 1..5 extra lines per few frequently changed GUI
classes (when well organized) and some 5..10 lines for
preserving/re-fixing the few application data anchors. That's all.
It doesn't need to be fully consistent, as it serves its purpose when
editing during runtime is possible in 'most cases'. And most edit
cases during debug sessions are typically just small fixes and
touches of function code. One fix revealing the next follow-up bug
.. beautifying things .. as it is.

critical data structure changes are very rare occasions.

A general reload scheme ("edit at runtime") most effectively zeroes
out a core time consumer while exploring, iterating, debugging,
smoothing ..
On the time scale of these tasks, this effect cannot, in my opinion,
be matched by setup code of whatever kind in non-trivial apps. (I say
this having worked this way before, and with less dynamic
programming languages.)



I typically need just one full app reboot per 20..50 edit-run cycles, I
guess, and just a few unit test runs per release. Even for
Cython/pyximport things I added support for this reload edit-run cycle,
because I cannot imagine developing without it.


Let me assure you - it works :)

for example yesterday, I create a full CRUD-interface for a web-app
(which is the thing I work on mostly these days) without *once* taking a
look at the browser. I wrote actions, forms, HTML, and tests along,
developed the thing ready, asserted certain constraints and error-cases,
and once finished, fired up the browser - and he saw, it worked!

Yes, I could have written that code on the fly, hitting F5 every few
seconds/minutes to see if things work out (instead of just running the
specific tests through nose) - and once I'd be finished, I didn't have
anything permanent that ensured the functionality over time.


well, in this 1-day example you coded a thing which you had obviously
already done similarly several times. Still, I guess you had some debug
sessions too, some exploration of new things and new aspects,
profiting in particular from the Python interactive prompt, the
interactive debugger, post mortem, etc. ..

unless you type as perfectly from scratch as that guy in Genesis 1 :-)


this is a comfortable, quasi-religious theory, raised often and easily
here and there; on that fine-grained level of code evolution, however,
it is impracticable and very slow. an interesting issue.


To me, that's as much of a religious statement, often heard from people
who aren't (really) into test-driven development. By which I personally
don't mean the variant where one writes tests first, and then code. I
always develop both in lock-step, sometimes introducing a new feature
first in my tests, e.g. as new arguments or new calls, and then
implementing it, but just as often the other way round.

The argument is always a variation of "my problem is too complicated,
the code-base too intertwined, to make it possible to test this".


well, nothing against preaching about tests ;-) , unless it's too
much. As with every extreme, there is a threshold beyond which adding
more tests no longer pays off at the bottom line. There are costs too,
and other bottlenecks ...


it's not an argument against writing tests for
testing/validating/stabilizing and other indirect high-level benefits.
There are simply other dimensions too, which are worth a thought or
two. The question of a good reload scheme is oriented more towards the
debugging/interactive/exploration/editing level: things where
Python in particular opens new dimensions through its superior dynamic
and introspective nature.



I call this a bluff. You might work with a code-base that makes it
harder than needed to write tests for new functionality. But then, most
of the time this is a sign of lack of design. Writing with testab

Call Signtool using python

2010-03-02 Thread enda man
Hi,

I want to call the Windows signtool to sign a binary from a python
script.

Here is my script:
//
os.chdir('./Install/activex/cab')
subprocess.call(["signtool", "sign", "/v", "/f", "webph.pfx", "/t",
"http://timestamp.verisign.com/scripts/timstamp.dll", "WebPh.exe" ])
//

But I am getting this error:

SignTool Error: The specified PFX password is not correct.

Number of files successfully Signed: 0
Number of warnings: 0
Number of errors: 1
Finished building plugin installer
scons: done building targets.



This python script is called as part of a scons build, which is also
python code.

Anyone seen this before or can pass on any ideas.
Tks,
EM




Re: Verifying My Troublesome Linkage Claim between Python and Win7

2010-03-02 Thread alex23
"W. eWatson"  wrote:
> My claim is that if one creates a program in a folder that reads a file
> in that folder, and then copies the program to another folder, it will
> read the data file in the first folder, and not a changed file in the
> new folder.
> I'd appreciate it if some w7 users could try this, and let me know what
> they find.

On a fresh install of Win7 Ultimate, I created your program & the text
file in one folder, then copied the program both using ctrl-c/ctrl-v
and later ctrl-drag-&-drop. In both cases, the copied program *did
not* refer to the text file when executed:

D:\projects>a
Traceback (most recent call last):
  File "D:\projects\a.py", line 1, in <module>
track_file = open("verify.txt")
IOError: [Errno 2] No such file or directory: 'verify.txt'

Whatever you seem to think you did, you didn't, or you're not
providing enough detail on what you did to repeat the behaviour.

I do agree with the sentiment that this isn't a Python issue.


Re: How to use python to register a service (an existing .exe file)

2010-03-02 Thread coldpizza
instsrv.exe does not come with Windows by default, but I guess it
should be possible to add a service using the win32 built-in `sc`
command line tool.

Try `sc create` from a console.

The app you want to install as a service will still have to be
compliant with the win32 service interface, otherwise it will throw an
error, although the app will still be started.

On Feb 16, 2:10 am, News123  wrote:
> Hi,
>
> Is there a python way to register new windows services.
>
> I am aware of the
> instsrv.exe program, which can be used to install services.
> I could use subprocess.Popen to call
>
> instsrv.exe "service_name" program.exe
>
> but wondered, whether there's already an existing function.
>
> Thans in advance and bye
>
> N
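The `sc create` suggestion above can be sketched from Python as well. This is an assumption-laden illustration (the service name and path are placeholders, and it can only actually run on Windows); note that `sc`'s parser requires a space *after* each `option=`.

```python
import subprocess

# Windows-only sketch: register program.exe as a service via `sc`.
cmd = ["sc", "create", "MyPyService",
       "binPath= C:\\path\\to\\program.exe",   # note the space after '='
       "start= auto"]
# subprocess.check_call(cmd)  # uncomment on a Windows host
```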



Re: Adding to a module's __dict__?

2010-03-02 Thread Roy Smith
In article ,
 Chris Rebert  wrote:

> On Mon, Mar 1, 2010 at 8:27 PM, Roy Smith  wrote:
> > >From inside a module, I want to add a key-value pair to the module's
> > __dict__.  I know I can just do:
> >
> > FOO = 'bar'
> >
> > at the module top-level, but I've got 'FOO' as a string and what I
> > really need to do is
> >
> > __dict__['Foo'] = 'bar'
> >
> > When I do that, I get "NameError: name '__dict__' is not defined".  Is
> > it possible to do what I'm trying to do?
> 
> Yes; just modify the dict returned by the globals() built-in function
> instead.

Ah, cool.  Thanks.

> It's usually not wise to do this and is better to use a
> separate dict instead, but I'll assume you know what you're doing and
> have good reasons to disregard the standard advice due to your
> use-case.

Why is it unwise?

The use case is I'm importing a bunch of #define constants from a C header 
file.  I've got triples that I want to associate; the constant name, the 
value, and a string describing it.  The idea is I want to put in the 
beginning of the module:

declare('XYZ_FOO', 0, "The foo property")
declare('XYZ_BAR', 1, "The bar property")
declare('XYZ_BAZ', 2, "reserved for future use")

and so on.  I'm going to have hundreds of these, so ease of use, ease of 
maintenance, and niceness of presentation are important.

My declare() function will not just set XYZ_FOO = 0 at module global scope, 
but also insert entries in a variety of dicts so I can look up the 
description string, map from a value back to the constant name, etc.

I *could* do this in a separate dict, but the notational convenience of 
being able to have the original constant names globally available is pretty 
important.


Re: Adding to a module's __dict__?

2010-03-02 Thread Steve Holden
Roy Smith wrote:
> In article ,
>  Chris Rebert  wrote:
> 
>> On Mon, Mar 1, 2010 at 8:27 PM, Roy Smith  wrote:
>>> >From inside a module, I want to add a key-value pair to the module's
>>> __dict__.  I know I can just do:
>>>
>>> FOO = 'bar'
>>>
>>> at the module top-level, but I've got 'FOO' as a string and what I
>>> really need to do is
>>>
>>> __dict__['Foo'] = 'bar'
>>>
>>> When I do that, I get "NameError: name '__dict__' is not defined".  Is
>>> it possible to do what I'm trying to do?
>> Yes; just modify the dict returned by the globals() built-in function
>> instead.
> 
> Ah, cool.  Thanks.
> 
>> It's usually not wise to do this and is better to use a
>> separate dict instead, but I'll assume you know what you're doing and
>> have good reasons to disregard the standard advice due to your
>> use-case.
> 
> Why is it unwise?
> 
> The use case is I'm importing a bunch of #define constants from a C header 
> file.  I've got triples that I want to associate; the constant name, the 
> value, and a string describing it.  The idea is I want to put in the 
> beginning of the module:
> 
> declare('XYZ_FOO', 0, "The foo property")
> declare('XYZ_BAR', 1, "The bar property")
> declare('XYZ_BAZ', 2, "reserved for future use")
> 
> and so on.  I'm going to have hundreds of these, so ease of use, ease of 
> maintenance, and niceness of presentation are important.
> 
> My declare() function will not just set XYZ_FOO = 1 at module global scope, 
> but also insert entries in a variety of dicts so I can look up the 
> description string, map from a value back to the constant name, etc.
> 
> I *could* do this in a separate dict, but the notational convenience of 
> being able to have the original constant names globally available is pretty 
> important.
> 
And how important is it to make sure that whatever data your program
processes doesn't overwrite the actual variable names you want to use to
program the processing?

If you use this technique you are effectively making your program a
hostage to fortune, as you no longer control the namespace you are using
for the programming.

regards
 Steve
-- 
Steve Holden   +1 571 484 6266   +1 800 494 3119
PyCon is coming! Atlanta, Feb 2010  http://us.pycon.org/
Holden Web LLC http://www.holdenweb.com/
UPCOMING EVENTS:http://holdenweb.eventbrite.com/


Re: Method / Functions - What are the differences?

2010-03-02 Thread John Posner

On 3/2/2010 3:57 AM, Bruno Desthuilliers wrote:


With the instance object (if any) and class object available,
it's easy to create a method object that wraps the function object.


That's perfect.



Fixed.


But there's also a typo to fix in the Python implementation of the
Method object: in the call method, it should inject self.im_self as
first arg, not self.im_func. This had been spotted by someone named John
Posner, IIRC !-)



Fixed (oops!).

I've updated the text at this location:

>   http://cl1p.net/bruno_0301.rst/

I think the ball is back in your court, Bruno. I'd be happy to help more 
-- feel free to contact me off-list, at [email protected].


Best,
John



os.fdopen() issue in Python 3.1?

2010-03-02 Thread Albert Hopkins
I have a snippet of code that looks like this:

pid, fd = os.forkpty()
if pid == 0:
subprocess.call(args)
else:
input = os.fdopen(fd).read()
...


This seems to work fine for CPython 2.5 and 2.6 on my Linux system.
However, with CPython 3.1 I get:

input = os.fdopen(fd).read()
IOError: [Errno 5] Input/output error

Is there something wrong in Python 3.1?  Is this the correct way to do
this (run a process in a pseudo-tty and read its output) or is there
another way I should/could be doing this?

-a
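A sketch of a commonly suggested workaround (assuming the error comes from the pty master raising EIO once the child exits, which is normal behaviour on Linux rather than a Python 3.1 bug): read in a loop and treat the OSError as end-of-file.

```python
import os
import subprocess

pid, fd = os.forkpty()
if pid == 0:
    subprocess.call(["echo", "hello"])
    os._exit(0)
else:
    chunks = []
    while True:
        try:
            data = os.read(fd, 1024)
        except OSError:          # EIO once the slave side is closed
            break
        if not data:             # regular EOF
            break
        chunks.append(data)
    os.waitpid(pid, 0)
    os.close(fd)
    output = b"".join(chunks)
    print(output)
```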




Re: Adding to a module's __dict__?

2010-03-02 Thread Mel
Roy Smith wrote:
[ ... ]
> Why is it unwise?
> 
> The use case is I'm importing a bunch of #define constants from a C header
> file.  I've got triples that I want to associate; the constant name, the
> value, and a string describing it.  The idea is I want to put in the
> beginning of the module:
> 
> declare('XYZ_FOO', 0, "The foo property")
> declare('XYZ_BAR', 1, "The bar property")
> declare('XYZ_BAZ', 2, "reserved for future use")
> 
> and so on.  I'm going to have hundreds of these, so ease of use, ease of
> maintenance, and niceness of presentation are important.

As long as the header file says what you think it says, you're fine.  If you 
encounter a file that does "#define sys", then the sys module is forever 
masked, and your module can't invoke it.  A header file that contains 
"#define declare" will be fun.

Mel.




RE: Call Signtool using python

2010-03-02 Thread Matt Mitchell
I think you need to use the /p switch to pass signtool.exe a password
when using the /f switch.
Check out
http://msdn.microsoft.com/en-us/library/8s9b9yaz%28VS.80%29.aspx for
more info.
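The suggested fix, sketched against the poster's script (the `/p` value is a placeholder; in practice the password should come from a secure source, not be hard-coded in the build):

```python
import subprocess

cmd = ["signtool", "sign", "/v",
       "/f", "webph.pfx",
       "/p", "PFX_PASSWORD",   # placeholder -- supply the real PFX password
       "/t", "http://timestamp.verisign.com/scripts/timstamp.dll",
       "WebPh.exe"]
# subprocess.call(cmd)  # Windows-only; returns 0 on success
```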



 
 
-Original Message-
From: [email protected]
[mailto:[email protected]] On
Behalf Of enda man
Sent: Tuesday, March 02, 2010 6:34 AM
To: [email protected]
Subject: Call Signtool using python

Hi,

I want to call the Windows signtool to sign a binary from a python
script.

Here is my script:
//
os.chdir('./Install/activex/cab')
subprocess.call(["signtool", "sign", "/v", "/f", "webph.pfx", "/t",
"http://timestamp.verisign.com/scripts/timstamp.dll", "WebPh.exe" ])
//

But I am getting this error:

SignTool Error: The specified PFX password is not correct.

Number of files successfully Signed: 0
Number of warnings: 0
Number of errors: 1
Finished building plugin installer
scons: done building targets.



This python script is called as part of a scons build, which is also
python code.

Anyone seen this before or can pass on any ideas.
Tks,
EM




Python training in Florida, April 27-29

2010-03-02 Thread Mark Lutz
Tired of the Winter weather?  Make your plans now to
attend our upcoming Florida Python training seminar
in April.  This 3-day public class will be held on
April 27-29, in Sarasota, Florida.  It is open to
both individual and group enrollments.

For more details on the class, as well as registration
instructions, please visit the class web page:

http://learning-python.com/2010-public-classes.html

Note that we have moved to a new domain name.  If you
are unable to attend in April, our next Sarasota class
is already scheduled for July 13-15.

Thanks, and we hope to see you at a Python class in
sunny and warm Florida soon.

--Mark Lutz at learning-python.com


Re: Adding to a module's __dict__?

2010-03-02 Thread Roy Smith
On Mar 2, 8:33 am, Steve Holden  wrote:

> And how important is it to make sure that whatever data your program
> processes doesn't overwrite the actual variable names you want to use to
> program the processing?

Oh, I see what you're saying.  You're thinking I was going to machine-
process the C header file and pattern-match the #define statements?
Actually, I was just hand-copying the values, and was looking for a
way to reduce typing.

But, I suppose if I were to machine-process the header files, that
would be a concern.  I suppose in that case I would make sure I only
inserted variables which matched a particular pattern (i.e.,
"[A-Z]+_[A-Z][A-Z0-9]+").  In fact, now that you got me thinking in
that direction...

Somewhat sadly, in my case, I can't even machine process the header
file.  I don't, strictly speaking, have a header file.  What I have is
a PDF which documents what's in the header file, and I'm manually re-
typing the data out of that.  Sigh.



Email Script

2010-03-02 Thread Victor Subervi
Hi;
I have the following code:

def my_mail():
  user, passwd, db, host = login()
  database = MySQLdb.connect(host, user, passwd, db)
  cursor= database.cursor()
  ourEmail1 = '[email protected]'
  ourEmail1 = '[email protected]'
  ourEmail2 = '[email protected]'
  form = cgi.FieldStorage()
  name = form.getfirst('name', '')
  email = form.getfirst('from', '')
  message = form.getfirst('message', '')
  message = 'Name: %s\nMessage: %s' % (name, message)
  subject = 'Message from Web Site'
  Email(
  from_address = email,
  to_address = ourEmail1,
  subject = subject,
  message = message
  ).send()
  Email(
  from_address = email,
  to_address = ourEmail2,
  subject = subject,
  message = message
  ).send()
  print 'Thank you, %s, we will get back to you shortly!' % (name)

This sends only the first of the two emails. Why doesn't it work to send the
second? What do?
TIA,
beno

-- 
The Logos has come to bear
http://logos.13gems.com/
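Since the poster's Email class isn't shown, here is a hedged alternative sketch using only the standard library: build one message addressed to both recipients and send it in a single SMTP transaction, instead of two separate sends (the addresses and host are placeholders).

```python
import smtplib
from email.mime.text import MIMEText

def build_message(from_addr, to_addrs, subject, body):
    msg = MIMEText(body)
    msg["From"] = from_addr
    msg["To"] = ", ".join(to_addrs)
    msg["Subject"] = subject
    return msg

def send_to_all(from_addr, to_addrs, subject, body, host="localhost"):
    msg = build_message(from_addr, to_addrs, subject, body)
    server = smtplib.SMTP(host)
    try:
        # one transaction, both recipients
        server.sendmail(from_addr, to_addrs, msg.as_string())
    finally:
        server.quit()
```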


Re: Email Script

2010-03-02 Thread Victor Subervi
On Tue, Mar 2, 2010 at 11:48 AM, Victor Subervi wrote:

> Hi;
> I have the following code:
>
> def my_mail():
>   user, passwd, db, host = login()
>   database = MySQLdb.connect(host, user, passwd, db)
>   cursor= database.cursor()
>   ourEmail1 = '[email protected]'
>   ourEmail1 = '[email protected]'
>   ourEmail2 = '[email protected]'
>   form = cgi.FieldStorage()
>   name = form.getfirst('name', '')
>   email = form.getfirst('from', '')
>   message = form.getfirst('message', '')
>   message = 'Name: %s\nMessage: %s' % (name, message)
>   subject = 'Message from Web Site'
>   Email(
>   from_address = email,
>   to_address = ourEmail1,
>   subject = subject,
>   message = message
>   ).send()
>   Email(
>   from_address = email,
>   to_address = ourEmail2,
>   subject = subject,
>   message = message
>   ).send()
>   print 'Thank you, %s, we will get back to you shortly!' % (name)
>
> This sends only the first of the two emails. Why doesn't it work to send
> the second? What do?
> TIA,
> beno
>

Should I put a timer between instances of Email?

>
>


Re: Adding to a module's __dict__?

2010-03-02 Thread Steve Holden
Roy Smith wrote:
> On Mar 2, 8:33 am, Steve Holden  wrote:
> 
>> And how important is it to make sure that whatever data your program
>> processes doesn't overwrite the actual variable names you want to use to
>> program the processing?
> 
> Oh, I see what you're saying.  You're thinking I was going to machine-
> process the C header file and pattern-match the #define statements?
> Actually, I was just hand-copying the values, and was looking for a
> way to reduce typing.
> 
> But, I suppose if I were to machine-process the header files, that
> would be a concern.  I suppose in that case I would make sure I only
> inserted variables which matched a particular pattern (ie, "[A-Z]+_[A-
> Z][A-Z0-9]+").  In fact, now that you got me thinking in that
> direction...
> 
> Somewhat sadly, in my case, I can't even machine process the header
> file.  I don't, strictly speaking, have a header file.  What I have is
> a PDF which documents what's in the header file, and I'm manually re-
> typing the data out of that.  Sigh.
> 
Don't worry. Now you have revealed the *real* problem you may well find
there are c.l.py readers who can help! Python can read PDFs ...

regards
 Steve
-- 
Steve Holden   +1 571 484 6266   +1 800 494 3119
PyCon is coming! Atlanta, Feb 2010  http://us.pycon.org/
Holden Web LLC http://www.holdenweb.com/
UPCOMING EVENTS:http://holdenweb.eventbrite.com/



Re: Adding to a module's __dict__?

2010-03-02 Thread John Posner

On 3/2/2010 10:19 AM, Roy Smith wrote:


Somewhat sadly, in my case, I can't even machine process the header
file.  I don't, strictly speaking, have a header file.  What I have is
a PDF which documents what's in the header file, and I'm manually re-
typing the data out of that.  Sigh.


Here's an idea, perhaps too obvious, to minimize your keystrokes:

1. Create a text file with the essential data:

XYZ_FOO   0  The foo property
XYZ_BAR   1  The bar property
XYZ_BAZ   2  reserved for future use

2. Use a Python script to convert this into the desired code:

declare('XYZ_FOO', 0, "The foo property")
declare('XYZ_BAR', 1, "The bar property")
declare('XYZ_BAZ', 2, "reserved for future use")

Note:

>>> s
'XYZ_FOO   0  The foo property'
>>> s.split(None, 2)
['XYZ_FOO', '0', 'The foo property']

HTH,
John
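The two steps above can be sketched end-to-end (the sample lines are the ones from the post; `split(None, 2)` splits on runs of whitespace, keeping the description intact):

```python
raw = """\
XYZ_FOO   0  The foo property
XYZ_BAR   1  The bar property
XYZ_BAZ   2  reserved for future use
"""

calls = []
for line in raw.splitlines():
    name, value, desc = line.split(None, 2)
    calls.append("declare(%r, %s, %r)" % (name, int(value), desc))

print("\n".join(calls))
```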


Re: taking python enterprise level?...

2010-03-02 Thread Aahz
In article ,
D'Arcy J.M. Cain  wrote:
>
>Put as much memory as you can afford/fit into your database server.
>It's the cheapest performance boost you can get.  If you have a serious
>application put at least 4GB into your dedicated database server.
>Swapping is your enemy.

Also, put your log/journal files on a different spindle from the database
files.  That makes a *huge* difference.
-- 
Aahz ([email protected])   <*> http://www.pythoncraft.com/

"Many customs in this life persist because they ease friction and promote
productivity as a result of universal agreement, and whether they are
precisely the optimal choices is much less important." --Henry Spencer


Queue peek?

2010-03-02 Thread Veloz
Hi all
I'm looking for a queue that I can use with multiprocessing, which has
a peek method.

I've seen some discussion about queue.peek but don't see anything in
the docs about it.

Does python have a queue class with peek semantics?

Michael
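There is no documented peek() on the standard queues. For a plain queue.Queue (though *not* for multiprocessing.Queue, which lives in another process's address space) a common workaround is to look at the head of the underlying deque. Note this relies on an implementation detail, not a public API:

```python
import queue

q = queue.Queue()
q.put("a")
q.put("b")

with q.mutex:            # hold the queue's own lock while peeking
    head = q.queue[0]    # q.queue is the underlying deque

print(head, q.qsize())   # nothing was consumed
```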


freebsd and multiprocessing

2010-03-02 Thread Tim Arnold
Hi,
I'm intending to use multiprocessing on a freebsd machine (6.3
release, quad core, 8cpus, amd64). I see in the doc that on this
platform I can't use synchronize:

ImportError: This platform lacks a functioning sem_open
implementation, therefore, the required synchronization primitives
needed will not function, see issue 3770.

As far as I can tell, I have no need to synchronize the processes--I
have several processes run separately and I need to know when they're
all finished; there's no communication between them and each owns its
own log file for output.

Is anyone using multiprocessing on FreeBSD and run into any other
gotchas?
thanks,
--Tim Arnold
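The "just wait for them all" pattern described above can be sketched as below. It assumes plain Process.start()/join() does not need the sem_open-backed primitives (join simply waits on the child process); this is untested on FreeBSD 6.3.

```python
import multiprocessing as mp

def work(i):
    pass  # each worker would write to its own log file here

def run_all(n=4):
    # "fork" is Unix-only and avoids children re-importing the main module
    ctx = mp.get_context("fork")
    procs = [ctx.Process(target=work, args=(i,)) for i in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()                     # block until that process finishes
    return [p.exitcode for p in procs]

if __name__ == "__main__":
    print(run_all())
```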


Re: Draft PEP on RSON configuration file format

2010-03-02 Thread Robert Kern

On 2010-03-01 22:55 PM, Terry Reedy wrote:

On 3/1/2010 7:56 PM, Patrick Maupin wrote:

On Mar 1, 5:57 pm, Erik Max Francis wrote:

Patrick Maupin wrote:
This not only seriously stretching the meaning of the term "superset"
(as Python is most definitely not even remotely a superset of JSON), but


Well, you are entitled to that opinion, but seriously, if I take valid
JSON, replace unquoted true with True, unquoted false with False,
replace unquoted null with None, and take the quoted strings and
replace occurrences of \u with the appropriate unicode, then I do,
in fact, have valid Python. But don't take my word for it -- try it
out!


To me this is so strained that I do not see why why you are arguing the
point. So what? The resulting Python 'program' will be equivalent, I
believe, to 'pass'. Ie, construct objects and then discard them with no
computation or output.


Not if you eval() rather than exec(). It's reasonably well-accepted that JSON is 
very close to being a subset of Python's expression syntax with just a few 
modifications.
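That transformation can be checked mechanically. A sketch only: the naive string replace below would corrupt strings that happen to contain the bare words true/false/null, so a real converter would have to respect quoting.

```python
import ast
import json

# JSON text using the three named constants that differ from Python.
json_text = '{"a": [1, 2.5, "x"], "b": true, "c": null, "d": false}'

# The naive substitution described above; safe here because no string
# literal in the sample contains the replaced words.
py_text = (json_text.replace('true', 'True')
                    .replace('false', 'False')
                    .replace('null', 'None'))

# The result is a valid Python expression, evaluable without exec().
obj = ast.literal_eval(py_text)
assert obj == json.loads(json_text)
```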


--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco



Re: Is this secure?

2010-03-02 Thread Robert Kern

On 2010-02-28 01:28 AM, Aahz wrote:

In article,
Robert Kern  wrote:


If you are storing the password instead of making your user remember
it, most platforms have some kind of keychain secure password
storage. I recommend reading up on the APIs available on your targeted
platforms.


Are you sure?  I haven't done a lot of research, but my impression was
that Windows didn't have anything built in.


You're right, not built-in, but Windows does provide enough crypto services for 
a cross-platform Python implementation to be built:


  http://pypi.python.org/pypi/keyring

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco



Re: Adding to a module's __dict__?

2010-03-02 Thread Carl Banks
On Mar 2, 5:21 am, Roy Smith  wrote:
> In article ,
>  Chris Rebert  wrote:
>
>
>
> > On Mon, Mar 1, 2010 at 8:27 PM, Roy Smith  wrote:
> > > >From inside a module, I want to add a key-value pair to the module's
> > > __dict__.  I know I can just do:
>
> > > FOO = 'bar'
>
> > > at the module top-level, but I've got 'FOO' as a string and what I
> > > really need to do is
>
> > > __dict__['Foo'] = 'bar'
>
> > > When I do that, I get "NameError: name '__dict__' is not defined".  Is
> > > it possible to do what I'm trying to do?
>
> > Yes; just modify the dict returned by the globals() built-in function
> > instead.
>
> Ah, cool.  Thanks.
>
> > It's usually not wise to do this and is better to use a
> > separate dict instead, but I'll assume you know what you're doing and
> > have good reasons to disregard the standard advice due to your
> > use-case.
>
> Why is it unwise?


He didn't say it was unwise.  He said it's usually not wise.


Carl Banks
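For reference, the globals() approach Chris suggested looks like this (a minimal sketch; FOO and 'bar' are placeholder names):

```python
# Bind a module-level name whose spelling is only known at runtime.
name, value = 'FOO', 'bar'
globals()[name] = value

# The name is now an ordinary module-level binding.
print(FOO)   # prints: bar
```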


Re: freebsd and multiprocessing

2010-03-02 Thread Philip Semanchuk


On Mar 2, 2010, at 11:31 AM, Tim Arnold wrote:


Hi,
I'm intending to use multiprocessing on a freebsd machine (6.3
release, quad core, 8cpus, amd64). I see in the doc that on this
platform I can't use synchronize:

ImportError: This platform lacks a functioning sem_open
implementation, therefore, the required synchronization primitives
needed will not function, see issue 3770.

As far as I can tell, I have no need to synchronize the processes--I
have several processes run separately and I need to know when they're
all finished; there's no communication between them and each owns its
own log file for output.

Is anyone using multiprocessing on FreeBSD and run into any other
gotchas?


Hi Tim,
I don't use multiprocessing but I've written two low-level IPC  
packages, one for SysV IPC and the other for POSIX IPC.


I think that multiprocessing prefers POSIX IPC (which is where  
sem_open() comes from). I don't know what it uses if that's not  
available, but SysV IPC seems a likely alternative. I must emphasize,  
however, that that's a guess on my part.


FreeBSD didn't have POSIX IPC support until 7.0, and that was sort of  
broken until 7.2. As it happens, I was testing my POSIX IPC code  
against 7.2 last night and it works just fine.


SysV IPC works under FreeBSD 6 (and perhaps earlier versions; 6 is the  
oldest I've tested). ISTR that by default each message queue is  
limited to 2048 bytes in total size. 'sysctl kern.ipc' can probably  
tell you that and may even let you change it. Other than that I can't  
think of any SysV limitations that might bite you.


HTH
Philip
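For Tim's stated need (independent workers, just knowing when they have all finished), plain Process.join() should suffice and avoids the synchronization primitives entirely; a sketch:

```python
import multiprocessing

def worker(n):
    # Each process does its own work and writes its own log file;
    # no Lock/Semaphore (and hence no sem_open) is involved.
    pass

if __name__ == '__main__':
    procs = [multiprocessing.Process(target=worker, args=(i,))
             for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()        # returns once every process has finished
```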





Re: Email Script

2010-03-02 Thread Victor Subervi
On Tue, Mar 2, 2010 at 11:53 AM, Victor Subervi wrote:

> On Tue, Mar 2, 2010 at 11:48 AM, Victor Subervi 
> wrote:
>
>> Hi;
>> I have the following code:
>>
>> def my_mail():
>>   user, passwd, db, host = login()
>>   database = MySQLdb.connect(host, user, passwd, db)
>>   cursor= database.cursor()
>>   ourEmail1 = '[email protected]'
>>   ourEmail1 = '[email protected]'
>>   ourEmail2 = '[email protected]'
>>   form = cgi.FieldStorage()
>>   name = form.getfirst('name', '')
>>   email = form.getfirst('from', '')
>>   message = form.getfirst('message', '')
>>   message = 'Name: %s\nMessage: %s' % (name, message)
>>   subject = 'Message from Web Site'
>>   Email(
>>   from_address = email,
>>   to_address = ourEmail1,
>>   subject = subject,
>>   message = message
>>   ).send()
>>   Email(
>>   from_address = email,
>>   to_address = ourEmail2,
>>   subject = subject,
>>   message = message
>>   ).send()
>>   print 'Thank you, %s, we will get back to you shortly!' % (name)
>>
>> This sends only the first of the two emails. Why doesn't it work to send
>> the second? What do?
>> TIA,
>> beno
>>
>
> Should I put a timer between instances of Email?
>
>>
>> Adding a timer didn't work. Do I need to explicitly close the smtp
connection before sending the next email? I don't understand why there's a
problem.
TIA,
beno
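The Email class isn't shown in the post, so this is only a guess at the plumbing: with the standard library's smtplib, sending two messages back-to-back works without any timer when each message is built and sent on a cleanly managed connection (host and addresses below are placeholders):

```python
import smtplib
from email.mime.text import MIMEText

def build_message(from_addr, to_addr, subject, body):
    # Plain-text message with the standard RFC 5322 headers.
    msg = MIMEText(body)
    msg['From'] = from_addr
    msg['To'] = to_addr
    msg['Subject'] = subject
    return msg

def send_to_all(from_addr, recipients, subject, body, host='localhost'):
    # One SMTP connection, one sendmail() call per recipient.
    server = smtplib.SMTP(host)
    try:
        for to_addr in recipients:
            msg = build_message(from_addr, to_addr, subject, body)
            server.sendmail(from_addr, [to_addr], msg.as_string())
    finally:
        server.quit()   # always close the connection
```

If the second send still fails, letting the smtplib exceptions (e.g. SMTPRecipientsRefused) propagate will say why, which is more informative than a silent failure.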


Re: Adding to a module's __dict__?

2010-03-02 Thread MRAB

John Posner wrote:

On 3/2/2010 10:19 AM, Roy Smith wrote:


Somewhat sadly, in my case, I can't even machine process the header
file.  I don't, strictly speaking, have a header file.  What I have is
a PDF which documents what's in the header file, and I'm manually re-
typing the data out of that.  Sigh.


Here's an idea, perhaps too obvious, to minimize your keystrokes:

1. Create a text file with the essential data:

XYZ_FOO   0  The foo property
XYZ_BAR   1  The bar property
XYZ_BAZ   2  reserved for future use

2. Use a Python script to convert this into the desired code:

declare('XYZ_FOO', 0, "The foo property")
declare('XYZ_BAR', 1, "The bar property")
declare('XYZ_BAZ', 2, "reserved for future use")

Note:

 >>> s
'XYZ_FOO   0  The foo property'
 >>> s.split(None, 2)
['XYZ_FOO', '0', 'The foo property']


You might be able to reduce your typing by copy-and-pasting the relevant
text from the PDF into an editor and then editing it.
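The conversion step sketched above might look like this (file handling omitted; the three sample lines stand in for the pasted PDF text):

```python
# Turn each 'NAME  VALUE  DESCRIPTION' line into a declare(...) call.
lines = [
    'XYZ_FOO   0  The foo property',
    'XYZ_BAR   1  The bar property',
    'XYZ_BAZ   2  reserved for future use',
]
calls = []
for line in lines:
    name, value, descr = line.split(None, 2)   # split on whitespace runs
    calls.append('declare(%r, %s, %r)' % (name, value, descr))
print('\n'.join(calls))
```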


Re: os.fdopen() issue in Python 3.1?

2010-03-02 Thread MRAB

Albert Hopkins wrote:

I have a snippet of code that looks like this:

pid, fd = os.forkpty()
if pid == 0:
subprocess.call(args)
else:
input = os.fdopen(fd).read()
...


This seems to work fine for CPython 2.5 and 2.6 on my Linux system.
However, with CPython 3.1 I get:

input = os.fdopen(fd).read()
IOError: [Errno 5] Input/output error

Is there something wrong in Python 3.1?  Is this the correct way to do
this (run a process in a pseudo-tty and read its output) or is there
another way I should/could be doing this?


The documentation also mentions the 'pty' module. Have you tried that
instead?
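A sketch of that route, assuming Linux (the EIO-at-EOF behaviour is Linux-specific): run the child on the slave side of a pty pair and read from the master.

```python
import os
import pty
import subprocess

master, slave = pty.openpty()
proc = subprocess.Popen(['echo', 'hello'], stdout=slave, stderr=slave)
os.close(slave)                  # parent must close its copy to see EOF
chunks = []
while True:
    try:
        data = os.read(master, 1024)
    except OSError:              # Linux raises EIO when the pty closes
        break
    if not data:
        break
    chunks.append(data)
proc.wait()
os.close(master)
output = b''.join(chunks)
```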


Re: Call Signtool using python

2010-03-02 Thread enda man
On Mar 2, 2:46 pm, "Matt Mitchell"  wrote:
> I think you need to use the /p switch to pass signtool.exe a password
> when using the /f switch.
> Check outhttp://msdn.microsoft.com/en-us/library/8s9b9yaz%28VS.80%29.aspxfor
> more info.
>
>
> -Original Message-
> From: [email protected]
>
> [mailto:[email protected]] On
> Behalf Of enda man
> Sent: Tuesday, March 02, 2010 6:34 AM
> To: [email protected]
> Subject: Call Signtool using python
>
> Hi,
>
> I want to call the Windows signtool to sign a binary from a python
> script.
>
> Here is my script:
> //
> os.chdir('./Install/activex/cab')
> subprocess.call(["signtool", "sign", "/v", "/f", "webph.pfx", "/t",
> "http://timestamp.verisign.com/scripts/timstamp.dll", "WebPh.exe" ])
> //
>
> But I am getting this error:
> 
> SignTool Error: The specified PFX password is not correct.
>
> Number of files successfully Signed: 0
> Number of warnings: 0
> Number of errors: 1
> Finished building plugin installer
> scons: done building targets.
> 
>
> This python script is called as part of a scons build, which is also
> python code.
>
> Anyone seen this before or can pass on any ideas.
> Tks,
> EM
>

The same command works when run from the command line but when I run
it from python  using subprocess.call(...) it fails as described in my
first post.  I do not need to use the /p switch as the password is
embedded in the pfx file.
That is why I think it is something I am or am not doing in python.

EM





Re: Adding to a module's __dict__?

2010-03-02 Thread Terry Reedy

On 3/2/2010 11:18 AM, John Posner wrote:

On 3/2/2010 10:19 AM, Roy Smith wrote:


Somewhat sadly, in my case, I can't even machine process the header
file. I don't, strictly speaking, have a header file. What I have is
a PDF which documents what's in the header file, and I'm manually re-
typing the data out of that. Sigh.


There are Python modules to read/write pdf.


Here's an idea, perhaps too obvious, to minimize your keystrokes:

1. Create a text file with the essential data:

XYZ_FOO 0 The foo property
XYZ_BAR 1 The bar property
XYZ_BAZ 2 reserved for future use

2. Use a Python script to convert this into the desired code:

declare('XYZ_FOO', 0, "The foo property")
declare('XYZ_BAR', 1, "The bar property")
declare('XYZ_BAZ', 2, "reserved for future use")

Note:

 >>> s
'XYZ_FOO 0 The foo property'
 >>> s.split(None, 2)
['XYZ_FOO', '0', 'The foo property']


Given that set of triples is constant, I would think about having the 
Python script do the computation just once, instead of with every 
import. In other words, the script should *call* the declare function 
and then write out the resulting set of dicts either to a .py or pickle 
file.


tjr




Re: Docstrings considered too complicated

2010-03-02 Thread Andreas Waldenburger
On Mon, 01 Mar 2010 21:09:39 + Mark Lawrence
 wrote:

> Andreas Waldenburger wrote:
> > [snip]
> > We did not buy code. If it were written in C or such, we would never
> > get to see it.
> > 
> > It's not our concern.
> > 
> > /W
> > 
> 
>  From your original post.
> 
> 
> a company that works with my company writes a lot of of their code in
> Python (lucky jerks). I've seen their code and it basically looks like
> this:
> 
> 
> So what is the relationship between your company and this other
> company?
They write a piece of software that we run as a component in a software
ecosystem that we (and others) have developed.


> When it gets down to pounds, shillings and pence (gosh, I'm
> old!:) it sure as hell could make a tremendous difference in the long
> term, given that usually maintainance costs are astronomical when
> compared to initial development costs.
> 
I totally agree, but as long as their software works, why should we
care how their code looks? It's totally their responsibility. Why
should we nanny every partner we have?

Don't get me wrong; our whole system is more fragile than I find
comfortable. But I guess getting 10ish different parties around the
globe to work in complete unison is quite a feat, and I'm surprised it
even works as it is. But it does, and I'm glad we don't have to
micromanage other people's code.


/W


-- 
INVALID? DE!



Re: Docstrings considered too complicated

2010-03-02 Thread Andreas Waldenburger
On Tue, 02 Mar 2010 09:48:47 +1100 Ben Finney
 wrote:

> > It's not our concern.  
> 
> Then I don't see what that problem is.

There is none. I was griping about how stupid they are. That is a
personal problem I have with their *code* (not software), and I thought
I'd just share my superiority complex with everyone.

I had hoped that everyone just read it, went like "Oh geez.", smiled it
off with a hint of lesson learned and got back to whatever it was they
were doing. Alas, I was wrong ... and I'm sorry.

/W

-- 
INVALID? DE!



conditional import into global namespace

2010-03-02 Thread mk

Hello everyone,

I have a class that is dependent on subprocess functionality. I would 
like to make it self-contained in the sense that it would import 
subprocess if it's not imported yet.


What is the best way to proceed with this?

I see a few possibilities:

1. do a class level import, like:

class TimeSync(object):

import subprocess


2. do an import in __init__, which is worse because it runs every time an 
instance is created:


def __init__(self, shiftsec, ntpserver):
import subprocess


Both of those methods have disadvantage in this context, though: they 
create 'subprocess' namespace in a class or instance, respectively.


Is there anyway to make it a global import?

Regards,
mk



case do problem

2010-03-02 Thread Tracubik
hi, i've to convert from Pascal this code:

iterations=0;
count=0;
REPEAT;
  iterations = iterations+1;
  ...
  IF (genericCondition) THEN count=count+1;
  ...
  CASE count OF:
1: m = 1
2: m = 10
3: m = 100
UNTIL count = 4 OR iterations = 20

i do something like this:

iterations = 0
count = 0

m_Switch = (1,10,100)

while True:
    iterations += 1
    ...
    if genericCondition:
        count += 1
    ...
    try:
        m = m_Switch[count-1]
    except: pass
    if count == 4 or iterations == 20:
        break

the problem is that when count = 4, m_Switch[4-1] has no value, so I use 
the try..except.

Is there a better solution to this problem? And, generally 
speaking, does the try..except block slow down the execution of the program 
or not?

Thank you in advance
Nico



Broken references in postings

2010-03-02 Thread Grant Edwards
I've noticed recently that a lot of the "references" and
"in-reply-to" headers in c.l.p are broken, resulting in the
inability to move from a child to a parent in a tree.

For example in a recent reply (subject: os.fdopen() issue in
Python 3.1?), the references and in-reply-to headers both
contained:

  1267539898.477222.7.ca...@necropolis

However, the article replied to has a message-ID header of 

  [email protected]

I don't see 1267539898.477222.7.ca...@necropolis anywhere in
the headers of the referrant.

Is something broken in the mail<->news gateway?

Or is it just individual news/mail clients that are broken?

-- 
Grant Edwards   grant.b.edwardsYow! I own seven-eighths of
  at   all the artists in downtown
  gmail.comBurbank!


Re: case do problem

2010-03-02 Thread Tracubik
additional information:

when count=4 I don't have to change the m value, so I have to do nothing, or 
something like m = m

Nico


Re: Call Signtool using python

2010-03-02 Thread Chris Rebert
On Tue, Mar 2, 2010 at 3:34 AM, enda man  wrote:
> Hi,
>
> I want to call the Windows signtool to sign a binary from a python
> script.
>
> Here is my script:
> //
> os.chdir('./Install/activex/cab')
> subprocess.call(["signtool", "sign", "/v", "/f", "webph.pfx", "/t",
> "http://timestamp.verisign.com/scripts/timstamp.dll", "WebPh.exe" ])
> //
>
> But I am getting this error:
> 
> SignTool Error: The specified PFX password is not correct.
>
> Number of files successfully Signed: 0
> Number of warnings: 0
> Number of errors: 1
> Finished building plugin installer
> scons: done building targets.
> 
>
>
> This python script is called as part of a scons build, which is also
> python code.
>
> Anyone seen this before or can pass on any ideas.

Nothing looks obviously wrong (though I'm unfamiliar with signtool).
Have you tried specifying an absolute path to webph.pfx?

Cheers,
Chris
--
http://blog.rebertia.com
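Chris's absolute-path suggestion can be sketched as follows (signtool_cmd is a hypothetical helper; the timestamp URL is the one from the original post):

```python
import os

def signtool_cmd(pfx_path, exe):
    # Resolving the .pfx path up front removes any dependence on the
    # working directory that scons happens to invoke the script from.
    return ['signtool', 'sign', '/v', '/f', os.path.abspath(pfx_path),
            '/t', 'http://timestamp.verisign.com/scripts/timstamp.dll',
            exe]
```

subprocess.call(signtool_cmd('webph.pfx', 'WebPh.exe')) then behaves the same no matter where os.chdir() has left the process.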


Re: Draft PEP on RSON configuration file format

2010-03-02 Thread Terry Reedy

On 3/2/2010 11:34 AM, Robert Kern wrote:

On 2010-03-01 22:55 PM, Terry Reedy wrote:

On 3/1/2010 7:56 PM, Patrick Maupin wrote:

On Mar 1, 5:57 pm, Erik Max Francis wrote:

Patrick Maupin wrote:
This not only seriously stretching the meaning of the term "superset"
(as Python is most definitely not even remotely a superset of JSON),
but


Well, you are entitled to that opinion, but seriously, if I take valid
JSON, replace unquoted true with True, unquoted false with False,
replace unquoted null with None, and take the quoted strings and
replace occurrences of \u with the appropriate unicode, then I do,
in fact, have valid Python. But don't take my word for it -- try it
out!


To me this is so strained that I do not see why why you are arguing the
point. So what? The resulting Python 'program' will be equivalent, I
believe, to 'pass'. Ie, construct objects and then discard them with no
computation or output.


Not if you eval() rather than exec().


>>> eval('1')

creates an object and discards it, with a net result of 'pass'.
What do you think I am missing?

It's reasonably well-accepted that

JSON is very close to being a subset of Python's expression syntax with
just a few modifications.


It is specifically JavaScript Object Notation, which is very similar to 
a subset of Python's object notation (number and string literals and 
list and dict displays (but not set displays), and three named 
constants). Without operators, it barely qualifies, to me, even as 
'expression syntax'.


To me, comparing object notation with programming language is not 
helpful to the OP's purpose. His main claim is that JSON can be usefully 
extended but that YAML is too much, so that perhaps he, with help, can 
find a 'sweet spot' in between.


Terry Jan Reedy






Re: freebsd and multiprocessing

2010-03-02 Thread Tim Arnold
On Mar 2, 11:52 am, Philip Semanchuk  wrote:
> On Mar 2, 2010, at 11:31 AM, Tim Arnold wrote:
>
>
>
>
>
> > Hi,
> > I'm intending to use multiprocessing on a freebsd machine (6.3
> > release, quad core, 8cpus, amd64). I see in the doc that on this
> > platform I can't use synchronize:
>
> > ImportError: This platform lacks a functioning sem_open
> > implementation, therefore, the required synchronization primitives
> > needed will not function, see issue 3770.
>
> > As far as I can tell, I have no need to synchronize the processes--I
> > have several processes run separately and I need to know when they're
> > all finished; there's no communication between them and each owns its
> > own log file for output.
>
> > Is anyone using multiprocessing on FreeBSD and run into any other
> > gotchas?
>
> Hi Tim,
> I don't use multiprocessing but I've written two low-level IPC  
> packages, one for SysV IPC and the other for POSIX IPC.
>
> I think that multiprocessing prefers POSIX IPC (which is where  
> sem_open() comes from). I don't know what it uses if that's not  
> available, but SysV IPC seems a likely alternative. I must emphasize,  
> however, that that's a guess on my part.
>
> FreeBSD didn't have POSIX IPC support until 7.0, and that was sort of  
> broken until 7.2. As it happens, I was testing my POSIX IPC code  
> against 7.2 last night and it works just fine.
>
> SysV IPC works under FreeBSD 6 (and perhaps earlier versions; 6 is the  
> oldest I've tested). ISTR that by default each message queue is  
> limited to 2048 bytes in total size. 'sysctl kern.ipc' can probably  
> tell you that and may even let you change it. Other than that I can't  
> think of any SysV limitations that might bite you.
>
> HTH
> Philip

Hi Philip,
Thanks for that information. I wish I could upgrade the machine to
7.2! alas, out of my power.  I get the following results from sysctl:
% sysctl kern.ipc | grep msg
kern.ipc.msgseg: 2048
kern.ipc.msgssz: 8
kern.ipc.msgtql: 40
kern.ipc.msgmnb: 2048
kern.ipc.msgmni: 40
kern.ipc.msgmax: 16384

I'll write some test programs using multiprocessing and see how they
go before committing to rewrite my current code. I've also been
looking at 'parallel python' although it may have the same issues.
http://www.parallelpython.com/

thanks again,
--Tim


Re: case do problem

2010-03-02 Thread Alf P. Steinbach

* Tracubik:

hi, i've to convert from Pascal this code:

iterations=0;
count=0;
REPEAT;
  iterations = iterations+1;
  ...
  IF (genericCondition) THEN count=count+1;
  ...
  CASE count OF:
1: m = 1
2: m = 10
3: m = 100


Uhm, is this syntactically valid Pascal? As I recall, every Pascal construct was 
delimited in some way. Once I had the complete Pascal syntax in my head, but 
alas, not anymore...




UNTIL count = 4 OR iterations = 20

i do something like this:

iterations = 0
count = 0

m_Switch = (1,10,100)

while True:
iterations +=1
...
if (genericCondition):
count +=1
...
try:
m = m_Switch[count-1]
except: pass
if count = 4 or iterations = 20

the problem is that when count = 4 m_Switch[4-1] have no value, so i use 
the try..except.


  iterations = 0
  count = 0
  while not( count == 4 or iterations == 20 ):
  iterations += 1
  # ...
  if generic_condition:
  count += 1
  # ...
  m = (1, 10, 100, 100)[count]



Is there a better solution to solve this problem?


Define "better". Do you mean faster, more clear, shorter, using less memory, 
what?

Above I've assumed that you want to get rid of the try block, since that's what 
you're asking about:



and, generally 
speaking, the try..except block slow down the execution of the program or 
not?


Probably, but don't think about it. Python programming is at a level where that 
kind of efficiency doesn't count. Or, ideally it shouldn't count.



Cheers & hth.,

- Alf


Re: case do problem

2010-03-02 Thread Alf P. Steinbach

* Alf P. Steinbach:

* Tracubik:

hi, i've to convert from Pascal this code:

iterations=0;
count=0;
REPEAT;
  iterations = iterations+1;
  ...
  IF (genericCondition) THEN count=count+1;
  ...
  CASE count OF:
1: m = 1
2: m = 10
3: m = 100


Uhm, is this syntactically valid Pascal? As I recall, every Pascal 
construct was delimited in some way. Once I had the complete Pascal 
syntax in my head, but alas, not anymore...




UNTIL count = 4 OR iterations = 20

i do something like this:

iterations = 0
count = 0

m_Switch = (1,10,100)

while True:
iterations +=1
...
if (genericCondition):
count +=1
...
try:
m = m_Switch[count-1]
except: pass
if count = 4 or iterations = 20

the problem is that when count = 4 m_Switch[4-1] have no value, so i 
use the try..except.


  iterations = 0
  count = 0
  while not( count == 4 or iterations == 20 ):
  iterations += 1
  # ...
  if generic_condition:
  count += 1
  # ...
  m = (1, 10, 100, 100)[count]


Add one extra 100 there.



Is there a better solution to solve this problem?


Define "better". Do you mean faster, more clear, shorter, using less 
memory, what?


Above I've assumed that you want to get rid of the try block, since 
that's what you're asking about:



and, generally speaking, the try..except block slow down the execution 
of the program or not?


Probably, but don't think about it. Python programming is at a level 
where that kind of efficiency doesn't count. Or, ideally it shouldn't 
count.



Cheers & hth.,

- Alf



Re: case do problem

2010-03-02 Thread Alf P. Steinbach

* Alf P. Steinbach:

* Alf P. Steinbach:

* Tracubik:

hi, i've to convert from Pascal this code:

iterations=0;
count=0;
REPEAT;
  iterations = iterations+1;
  ...
  IF (genericCondition) THEN count=count+1;
  ...
  CASE count OF:
1: m = 1
2: m = 10
3: m = 100


Uhm, is this syntactically valid Pascal? As I recall, every Pascal 
construct was delimited in some way. Once I had the complete Pascal 
syntax in my head, but alas, not anymore...




UNTIL count = 4 OR iterations = 20

i do something like this:

iterations = 0
count = 0

m_Switch = (1,10,100)

while True:
iterations +=1
...
if (genericCondition):
count +=1
...
try:
m = m_Switch[count-1]
except: pass
if count = 4 or iterations = 20

the problem is that when count = 4 m_Switch[4-1] have no value, so i 
use the try..except.


  iterations = 0
  count = 0
  while not( count == 4 or iterations == 20 ):
  iterations += 1
  # ...
  if generic_condition:
  count += 1
  # ...
  m = (1, 10, 100, 100)[count]


Add one extra 100 there.


Oh dear, it's one of those days.

   if 1 <= count <= 3:
   m = (1, 10, 100)[count - 1]



Is there a better solution to solve this problem?


Define "better". Do you mean faster, more clear, shorter, using less 
memory, what?


Above I've assumed that you want to get rid of the try block, since 
that's what you're asking about:



and, generally speaking, the try..except block slow down the 
execution of the program or not?


Probably, but don't think about it. Python programming is at a level 
where that kind of efficiency doesn't count. Or, ideally it shouldn't 
count.



Cheers & hth.,

- Alf



Re: Docstrings considered too complicated

2010-03-02 Thread Jean-Michel Pichavant

Andreas Waldenburger wrote:

On Tue, 02 Mar 2010 09:48:47 +1100 Ben Finney
 wrote:

  
It's not our concern.  
  

Then I don't see what that problem is.



There is none. I was griping about how stupid they are. That is a
personal problem I have with their *code* (not software), and I thought
I'd just share my superiority complex with everyone.

I had hoped that everyone just read it, went like "Oh geez.", smiled it
off with a hint of lesson learned and got back to whatever it was they
were doing. Alas, I was wrong ... and I'm sorry.

/W

  
There's something wrong with saying that stupid people write working code 
that totally satisfies your needs. Don't you agree? ;-)


JM


Re: Queue peek?

2010-03-02 Thread Raymond Hettinger
On Mar 2, 8:29 am, Veloz  wrote:
> Hi all
> I'm looking for a queue that I can use with multiprocessing, which has
> a peek method.
>
> I've seen some discussion about queue.peek but don't see anything in
> the docs about it.
>
> Does python have a queue class with peek semantics?

Am curious about your use case?  Why peek at something
that could be gone by the time you want to use it.

  val = q.peek()
  if something_i_want(val):
      v2 = q.get() # this could be different than val

Wouldn't it be better to just get() the value and return if you don't
need it?

  val = q.get()
  if not something_i_want(val):
      q.put(val)


Raymond
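If a single consumer really does need peek(), one hedged workaround is a per-consumer one-item buffer in front of multiprocessing.Queue; it is only safe with one reader, precisely because of the race described above:

```python
import multiprocessing

class PeekableQueue(object):
    """Queue wrapper whose peek() is valid for a single consumer only."""

    def __init__(self):
        self._queue = multiprocessing.Queue()
        self._buffer = []                    # holds at most one item

    def put(self, item):
        self._queue.put(item)

    def peek(self):
        # Pull an item into the local buffer so the same item is
        # returned by the next get().
        if not self._buffer:
            self._buffer.append(self._queue.get())
        return self._buffer[0]

    def get(self):
        if self._buffer:
            return self._buffer.pop()
        return self._queue.get()
```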





Re: conditional import into global namespace

2010-03-02 Thread MRAB

mk wrote:

Hello everyone,

I have a class that is dependent on subprocess functionality. I would 
like to make it self-contained in the sense that it would import 
subprocess if it's not imported yet.


What is the best way to proceed with this?

I see a few possibilities:

1. do a class level import, like:

class TimeSync(object):

import subprocess


2. do an import in init, which is worse bc it's ran every time an 
instance is created:


def __init__(self, shiftsec, ntpserver):
import subprocess


Both of those methods have disadvantage in this context, though: they 
create 'subprocess' namespace in a class or instance, respectively.


Is there anyway to make it a global import?


The simplest solution is just import it at the top of the module.


Re: os.fdopen() issue in Python 3.1?

2010-03-02 Thread Terry Reedy

On 3/2/2010 9:24 AM, Albert Hopkins wrote:

I have a snippet of code that looks like this:

 pid, fd = os.forkpty()
 if pid == 0:
 subprocess.call(args)
 else:
 input = os.fdopen(fd).read()
 ...


This seems to work fine for CPython 2.5 and 2.6 on my Linux system.


To get help, or report a bug, for something like this, be as specific as 
possible. 'Linux' may be too generic.



However, with CPython 3.1 I get:

 input = os.fdopen(fd).read()
 IOError: [Errno 5] Input/output error

Is there something wrong in Python 3.1?  Is this the correct way to do
this (run a process in a pseudo-tty and read its output) or is there
another way I should/could be doing this?


No idea, however, the first thing I would do is call the .fdopen and 
.read methods separately (on separate lines) to isolate which is raising 
the error.


tjr





Re: case do problem

2010-03-02 Thread MRAB

Tracubik wrote:

hi, i've to convert from Pascal this code:

iterations=0;
count=0;
REPEAT;
  iterations = iterations+1;
  ...
  IF (genericCondition) THEN count=count+1;
  ...
  CASE count OF:
1: m = 1
2: m = 10
3: m = 100
UNTIL count = 4 OR iterations = 20

i do something like this:

iterations = 0
count = 0

m_Switch = (1,10,100)

while True:
iterations +=1
...
if (genericCondition):
count +=1
...
try:
m = m_Switch[count-1]
except: pass
if count = 4 or iterations = 20

the problem is that when count = 4 m_Switch[4-1] have no value, so i use 
the try..except.


Is there a better solution to solve this problem? and, generally 
speaking, the try..except block slow down the execution of the program or 
not?



Use a dict:

m_Switch = {1: 1, 2: 10, 3: 100}

and then catch the KeyError.

Don't use a bare 'except', catch the specific exception you want to
catch, and don't worry about the speed unless you discover that it's a
real problem.
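Combining MRAB's dict with get() and a default also covers Nico's follow-up (leave m unchanged when count reaches 4):

```python
m_switch = {1: 1, 2: 10, 3: 100}

m = None
for count in (1, 2, 3, 4):
    # get() falls back to the current m, so count == 4 changes nothing.
    m = m_switch.get(count, m)
print(m)   # prints: 100
```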


Re: conditional import into global namespace

2010-03-02 Thread Jerry Hill
On Tue, Mar 2, 2010 at 12:46 PM, mk  wrote:
> Hello everyone,
>
> I have a class that is dependent on subprocess functionality. I would like
> to make it self-contained in the sense that it would import subprocess if
> it's not imported yet.
>
> What is the best way to proceed with this?

Just import subprocess at the top of your module.  If subprocess
hasn't been imported yet, it will be imported when your module is
loaded.  If it's already been imported, your module will use the
cached version that's already been imported.

In other words, it sounds like Python already does what you want.  You
don't need to do anything special.
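A quick demonstration of that caching behaviour:

```python
import sys
import subprocess            # first import executes the module body

import subprocess as again   # later imports just reuse the cached module

assert 'subprocess' in sys.modules
assert again is subprocess   # same module object: no second import happened
print('subprocess imported once, reused thereafter')
```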

-- 
Jerry
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Email Script

2010-03-02 Thread mk


Where do you take class Email from? There's no info in your mail on this.

Regards,
mk

--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing an assembler in Python

2010-03-02 Thread Albert van der Horst
In article ,
Giorgos Tzampanakis   wrote:
>I'm implementing a CPU that will run on an FPGA. I want to have a
>(dead) simple assembler that will generate the machine code for
>me. I want to use Python for that. Are there any libraries that
>can help me with the parsing of the assembly code?

I have a pentium assembler in perl on my website below.
(postit-fixup principle).
You could borrow some ideas, if you can read perl.
The main purpose is to have a very simple and straightforward
assembler at the expense of ease of use.

Groetjes Albert
--
-- 
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
alb...@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: freebsd and multiprocessing

2010-03-02 Thread Tim Arnold
On Mar 2, 12:59 pm, Tim Arnold  wrote:
> On Mar 2, 11:52 am, Philip Semanchuk  wrote:
> > On Mar 2, 2010, at 11:31 AM, Tim Arnold wrote:
>
> > > Hi,
> > > I'm intending to use multiprocessing on a freebsd machine (6.3
> > > release, quad core, 8cpus, amd64). I see in the doc that on this
> > > platform I can't use synchronize:
>
> > > ImportError: This platform lacks a functioning sem_open
> > > implementation, therefore, the required synchronization primitives
> > > needed will not function, see issue 3770.
>
> > > As far as I can tell, I have no need to synchronize the processes--I
> > > have several processes run separately and I need to know when they're
> > > all finished; there's no communication between them and each owns its
> > > own log file for output.
>
> > > Is anyone using multiprocessing on FreeBSD and run into any other
> > > gotchas?
>
> > Hi Tim,
> > I don't use multiprocessing but I've written two low-level IPC  
> > packages, one for SysV IPC and the other for POSIX IPC.
>
> > I think that multiprocessing prefers POSIX IPC (which is where  
> > sem_open() comes from). I don't know what it uses if that's not  
> > available, but SysV IPC seems a likely alternative. I must emphasize,  
> > however, that that's a guess on my part.
>
> > FreeBSD didn't have POSIX IPC support until 7.0, and that was sort of  
> > broken until 7.2. As it happens, I was testing my POSIX IPC code  
> > against 7.2 last night and it works just fine.
>
> > SysV IPC works under FreeBSD 6 (and perhaps earlier versions; 6 is the  
> > oldest I've tested). ISTR that by default each message queue is  
> > limited to 2048 bytes in total size. 'sysctl kern.ipc' can probably  
> > tell you that and may even let you change it. Other than that I can't  
> > think of any SysV limitations that might bite you.
>
> > HTH
> > Philip
>
> Hi Philip,
> Thanks for that information. I wish I could upgrade the machine to
> 7.2! alas, out of my power.  I get the following results from sysctl:
> % sysctl kern.ipc | grep msg
> kern.ipc.msgseg: 2048
> kern.ipc.msgssz: 8
> kern.ipc.msgtql: 40
> kern.ipc.msgmnb: 2048
> kern.ipc.msgmni: 40
> kern.ipc.msgmax: 16384
>
> I'll write some test programs using multiprocessing and see how they
> go before committing to rewrite my current code. I've also been
> looking at 'parallel python' although it may have the same 
> issues.http://www.parallelpython.com/
>
> thanks again,
> --Tim

Well that didn't work out well. I can't import either Queue or Pool
from multiprocessing, so I'm back to the drawing board. I'll see now
how parallel python does on freebsd.

--Tim

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Email Script

2010-03-02 Thread Victor Subervi
On Tue, Mar 2, 2010 at 12:56 PM, Victor Subervi wrote:

> On Tue, Mar 2, 2010 at 11:53 AM, Victor Subervi 
> wrote:
>
>> On Tue, Mar 2, 2010 at 11:48 AM, Victor Subervi 
>> wrote:
>>
>>> Hi;
>>> I have the following code:
>>>
>>> def my_mail():
>>>   user, passwd, db, host = login()
>>>   database = MySQLdb.connect(host, user, passwd, db)
>>>   cursor= database.cursor()
>>>   ourEmail1 = '[email protected]'
>>>   ourEmail1 = '[email protected]'
>>>   ourEmail2 = '[email protected]'
>>>   form = cgi.FieldStorage()
>>>   name = form.getfirst('name', '')
>>>   email = form.getfirst('from', '')
>>>   message = form.getfirst('message', '')
>>>   message = 'Name: %s\nMessage: %s' % (name, message)
>>>   subject = 'Message from Web Site'
>>>   Email(
>>>   from_address = email,
>>>   to_address = ourEmail1,
>>>   subject = subject,
>>>   message = message
>>>   ).send()
>>>   Email(
>>>   from_address = email,
>>>   to_address = ourEmail2,
>>>   subject = subject,
>>>   message = message
>>>   ).send()
>>>   print 'Thank you, %s, we will get back to you shortly!' % (name)
>>>
>>> This sends only the first of the two emails. Why doesn't it work to send
>>> the second? What do?
>>> TIA,
>>> beno
>>>
>>
>> Should I put a timer between instances of Email?
>>
>>>
>>> Adding a timer didn't work. Do I need to explicitly close the smtp
> connection before sending the next email? I don't understand why there's a
> problem.
> TIA,
> beno
>

Seems like the problem was I had to import simplemail for each of the
my_email functions.
beno

-- 
The Logos has come to bear
http://logos.13gems.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Draft PEP on RSON configuration file format

2010-03-02 Thread Robert Kern

On 2010-03-02 11:59 AM, Terry Reedy wrote:

On 3/2/2010 11:34 AM, Robert Kern wrote:

On 2010-03-01 22:55 PM, Terry Reedy wrote:

On 3/1/2010 7:56 PM, Patrick Maupin wrote:

On Mar 1, 5:57 pm, Erik Max Francis wrote:

Patrick Maupin wrote:
This is not only seriously stretching the meaning of the term "superset"
(as Python is most definitely not even remotely a superset of JSON),
but


Well, you are entitled to that opinion, but seriously, if I take valid
JSON, replace unquoted true with True, unquoted false with False,
replace unquoted null with None, and take the quoted strings and
replace occurrences of \u with the appropriate unicode, then I do,
in fact, have valid Python. But don't take my word for it -- try it
out!


To me this is so strained that I do not see why you are arguing the
point. So what? The resulting Python 'program' will be equivalent, I
believe, to 'pass'. Ie, construct objects and then discard them with no
computation or output.


Not if you eval() rather than exec().


 >>> eval('1')

creates an object and discards it, with a net result of 'pass'.
What do you think I am missing?


x = eval('1')


It's reasonably well-accepted that JSON is very close to being a subset
of Python's expression syntax with just a few modifications.


It is specifically JavaScript Object Notation, which is very similar to
a subset of Python's object notation (number and string literals and
list and dict displays (but not set displays), and three named
constants). Without operators, it barely qualifies, to me, even as
'expression syntax'.


Literal expression syntax, then.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list


Re: Queue peek?

2010-03-02 Thread Veloz
On Mar 2, 1:18 pm, Raymond Hettinger  wrote:
> On Mar 2, 8:29 am, Veloz  wrote:
>
> > Hi all
> > I'm looking for a queue that I can use with multiprocessing, which has
> > a peek method.
>
> > I've seen some discussion about queue.peek but don't see anything in
> > the docs about it.
>
> > Does python have a queue class with peek semantics?
>
> Am curious about your use case?  Why peek at something
> that could be gone by the time you want to use it.
>
>   val = q.peek()
>   if something_i_want(val):
>        v2 = q.get()         # this could be different than val
>
> Wouldn't it be better to just get() the value and return if you don't
> need it?
>
>   val = q.get()
>   if not something_i_want(val):
>       q.put(val)
>
> Raymond

Yeah, I hear you. Perhaps queue is not the best solution. My highest
level use case is this:  The user visits a web page (my app is a
Pylons app) and requests a "report" be created. The report takes too
long to create and display on the spot, so the user expects to visit
some url "later" and see if the specific report has completed, and if
so, have it returned to them.

At a lower level, I'm thinking of using some process workers to create
these reports in the background; there'd be a request queue (into
which requests for reports would go, each with an ID) and a completion
queue, into which the workers would write an entry when a report was
created, along with an ID matching the original request.

The "peek" parts comes in when the user comes back later to see if
their report has done. That is, in my page controller logic, I'd like
to look through the complete queue and see if the specific report has
been finished (I could tell by matching up the ID of the original
request to the ID in the completed queue). If there was an item in the
queue matching the ID, it would be removed.

It's since occurred to me that perhaps a queue is not the best way to
handle the completions. (We're ignoring the file system as a solution
for the time being, and focusing on in-memory structures). I'm
wondering now if a simple array of completed items wouldn't be better.
Of course, all the access to the array would have to be thread/process-
proof.  As you pointed out, for example, multi-part operations such as
"is such-and-such an ID in the list? If so, remove it and return in"
would have to be treated atomically to avoid concurrency issues.

Any thoughts on this design approach are welcomed :-)
Michael

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: freebsd and multiprocessing

2010-03-02 Thread Philip Semanchuk


On Mar 2, 2010, at 1:31 PM, Tim Arnold wrote:


On Mar 2, 12:59 pm, Tim Arnold  wrote:

On Mar 2, 11:52 am, Philip Semanchuk  wrote:

On Mar 2, 2010, at 11:31 AM, Tim Arnold wrote:



Hi,
I'm intending to use multiprocessing on a freebsd machine (6.3
release, quad core, 8cpus, amd64). I see in the doc that on this
platform I can't use synchronize:



ImportError: This platform lacks a functioning sem_open
implementation, therefore, the required synchronization primitives
needed will not function, see issue 3770.


As far as I can tell, I have no need to synchronize the  
processes--I
have several processes run separately and I need to know when  
they're
all finished; there's no communication between them and each owns  
its

own log file for output.



Is anyone using multiprocessing on FreeBSD and run into any other
gotchas?



Hi Tim,
I don't use multiprocessing but I've written two low-level IPC
packages, one for SysV IPC and the other for POSIX IPC.



I think that multiprocessing prefers POSIX IPC (which is where
sem_open() comes from). I don't know what it uses if that's not
available, but SysV IPC seems a likely alternative. I must  
emphasize,

however, that that's a guess on my part.


FreeBSD didn't have POSIX IPC support until 7.0, and that was sort  
of

broken until 7.2. As it happens, I was testing my POSIX IPC code
against 7.2 last night and it works just fine.


SysV IPC works under FreeBSD 6 (and perhaps earlier versions; 6 is  
the

oldest I've tested). ISTR that by default each message queue is
limited to 2048 bytes in total size. 'sysctl kern.ipc' can probably
tell you that and may even let you change it. Other than that I  
can't

think of any SysV limitations that might bite you.



HTH
Philip


Hi Philip,
Thanks for that information. I wish I could upgrade the machine to
7.2! alas, out of my power.  I get the following results from sysctl:
% sysctl kern.ipc | grep msg
kern.ipc.msgseg: 2048
kern.ipc.msgssz: 8
kern.ipc.msgtql: 40
kern.ipc.msgmnb: 2048
kern.ipc.msgmni: 40
kern.ipc.msgmax: 16384

I'll write some test programs using multiprocessing and see how they
go before committing to rewrite my current code. I've also been
looking at 'parallel python' although it may have the same 
issues.http://www.parallelpython.com/

thanks again,
--Tim


Well that didn't work out well. I can't import either Queue or Pool
from multiprocessing, so I'm back to the drawing board. I'll see now
how parallel python does on freebsd.



Sorry to hear that didn't work for you. Should you need to get down to  
the nuts & bolts level, my module for SysV IPC is here:

http://semanchuk.com/philip/sysv_ipc/


Good luck with Parallel Python,
Philip

--
http://mail.python.org/mailman/listinfo/python-list


Re: Draft PEP on RSON configuration file format

2010-03-02 Thread Patrick Maupin
On Mar 2, 11:59 am, Terry Reedy  wrote:

> To me, comparing object notation with programming language is not
> helpful to the OP's purpose.

Yes, I agree, it was a distraction.  I fell into the trap of
responding to the ludicrous claim that "if X is a superset of Y, then
X cannot possibly look better than Y" (a claim made by multiple people
all thinking it was clever) by showing that Y has other supersets that
do in fact look better than Y.  In doing this, I made the mistake of
choosing a superset of an analogue to Y, rather than to Y itself.
When called out on it, I showed that, in fact, the actual X that IS a
superset of Y can be used in a way that looks better.  However, you
are right that JSON is such a small subset of JS that it's really
pretty ridiculous to even compare them, but that still makes the point
that the original argument I was trying to refute is completely
specious.  In retrospect, though, I should have chosen a better way to
make that point, because I let myself get caught up in making and then
defending a flippant statement that I don't really care about one way
or the other.

> His main claim is that JSON can be usefully
> extended but that YAML is too much, so that perhaps he, with help, can
> find a 'sweet spot' in between.

An excellent summary of my position.

Thanks,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Queue peek?

2010-03-02 Thread MRAB

Veloz wrote:

On Mar 2, 1:18 pm, Raymond Hettinger  wrote:

On Mar 2, 8:29 am, Veloz  wrote:


Hi all
I'm looking for a queue that I can use with multiprocessing, which has
a peek method.
I've seen some discussion about queue.peek but don't see anything in
the docs about it.
Does python have a queue class with peek semantics?

Am curious about your use case?  Why peek at something
that could be gone by the time you want to use it.

  val = q.peek()
  if something_i_want(val):
   v2 = q.get() # this could be different than val

Wouldn't it be better to just get() the value and return if you don't
need it?

  val = q.get()
  if not something_i_want(val):
  q.put(val)

Raymond


Yeah, I hear you. Perhaps queue is not the best solution. My highest
level use case is this:  The user visits a web page (my app is a
Pylons app) and requests a "report" be created. The report takes too
long to create and display on the spot, so the user expects to visit
some url "later" and see if the specific report has completed, and if
so, have it returned to them.

At a lower level, I'm thinking of using some process workers to create
these reports in the background; there'd be a request queue (into
which requests for reports would go, each with an ID) and a completion
queue, into which the workers would write an entry when a report was
created, along with an ID matching the original request.

The "peek" parts comes in when the user comes back later to see if
their report has done. That is, in my page controller logic, I'd like
to look through the complete queue and see if the specific report has
been finished (I could tell by matching up the ID of the original
request to the ID in the completed queue). If there was an item in the
queue matching the ID, it would be removed.

It's since occurred to me that perhaps a queue is not the best way to
handle the completions. (We're ignoring the file system as a solution
for the time being, and focusing on in-memory structures). I'm
wondering now if a simple array of completed items wouldn't be better.
Of course, all the access to the array would have to be thread/process-
proof.  As you pointed out, for example, multi-part operations such as
"is such-and-such an ID in the list? If so, remove it and return in"
would have to be treated atomically to avoid concurrency issues.

Any thoughts on this design approach are welcomed :-)


A set of completed reports, or a dict with the ID as the key? The
advantage of a dict is that the value could contain several bits of
information, such as when it was completed, the status (OK or failed),
etc. You might want to wrap it in a class with locks (mutexes) to ensure
it's threadsafe.
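A minimal sketch of such a wrapper (class and method names are hypothetical):

```python
import threading

class CompletedReports:
    """Thread-safe store of finished reports, keyed by request ID."""
    def __init__(self):
        self._lock = threading.Lock()
        self._reports = {}

    def add(self, report_id, info):
        with self._lock:
            self._reports[report_id] = info

    def pop_if_done(self, report_id):
        # check-and-remove as one atomic step, avoiding the race
        with self._lock:
            return self._reports.pop(report_id, None)

store = CompletedReports()
store.add(42, {'status': 'OK'})
first = store.pop_if_done(42)    # the completed report
second = store.pop_if_done(42)   # None: already collected
print(first, second)
```

The point of pop_if_done() is that "is it there? then remove it" happens under one lock acquisition, so two web requests can't both collect the same report.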
--
http://mail.python.org/mailman/listinfo/python-list


Re: Queue peek?

2010-03-02 Thread Martin P. Hellwig

On 03/02/10 19:44, MRAB wrote:


information, such as when it was completed, the status (OK or failed),
etc. You might want to wrap it in a class with locks (mutexes) to ensure
it's threadsafe.
What actually happens if multiple threads write to a shared dictionary 
at the same time (not using the same key)?


I would think that if the hashing part of the dictionary has some sort 
of serialization (please forgive me if I misuse a term) it should 'just 
work'(tm)?


--
mph

--
http://mail.python.org/mailman/listinfo/python-list


Re: os.fdopen() issue in Python 3.1?

2010-03-02 Thread Albert Hopkins
On Tue, 2010-03-02 at 13:25 -0500, Terry Reedy wrote:

> To get help, or report a bug, for something like this, be as specific as 
> possible. 'Linux' may be too generic.

This is on Python on Gentoo Linux x64 with kernel 2.6.33.

> 
> > However, with CPython 3.1 I get:
> >
> >  input = os.fdopen(fd).read()
> >  IOError: [Errno 5] Input/output error
> >
> > Is there something wrong in Python 3.1?  Is this the correct way to do
> > this (run a process in a pseudo-tty and read its output) or is there
> > another way I should/could be doing this?
> 
> No idea, however, the first thing I would do is call the .fdopen and 
> .read methods separately (on separate lines) to isolate which is raising 
> the error.

The exception occurs on the read() method.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: os.fdopen() issue in Python 3.1?

2010-03-02 Thread Albert Hopkins
On Tue, 2010-03-02 at 17:32 +, MRAB wrote:
> The documentation also mentions the 'pty' module. Have you tried that
> instead? 

I haven't but I'll give it a try.  Thanks.

-a
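(For reference, a minimal POSIX-only sketch of the goal here -- spawning a child under a pseudo-tty and reading its output -- via pty.fork(); the echo command is just a stand-in for the real child process:)

```python
import os
import pty

pid, fd = pty.fork()
if pid == 0:
    # child: runs with the pty slave as its controlling terminal
    os.execlp('echo', 'echo', 'hello from a pty')

chunks = []
while True:
    try:
        data = os.read(fd, 1024)
    except OSError:   # Linux raises EIO once the child side closes
        break
    if not data:
        break
    chunks.append(data)
os.waitpid(pid, 0)
output = b''.join(chunks)
print(output)   # note: the tty line discipline turns \n into \r\n
```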


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Queue peek?

2010-03-02 Thread Daniel Stutzbach
On Tue, Mar 2, 2010 at 1:58 PM, Martin P. Hellwig <
[email protected]> wrote:

> What actually happens if multiple threads at the same time, write to a
> shared dictionary (Not using the same key)?
>

All of Python's built-in types are thread safe.  Both updates will happen.
--
Daniel Stutzbach, Ph.D.
President, Stutzbach Enterprises, LLC 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: conditional import into global namespace

2010-03-02 Thread mk

Jerry Hill wrote:

Just import subprocess at the top of your module.  If subprocess
hasn't been imported yet, it will be imported when your module is
loaded.  If it's already been imported, your module will use the
cached version that's already been imported.

In other words, it sounds like Python already does what you want.  You
don't need to do anything special.


Oh, thanks!

Hmm, it's different from dealing with packages, I guess -- IIRC, in 
packages only the code in the package's __init__.py is executed?


Regards,
mk


--
http://mail.python.org/mailman/listinfo/python-list


Re: freebsd and multiprocessing

2010-03-02 Thread Pop User
On 3/2/2010 12:59 PM, Tim Arnold wrote:
> 
> I'll write some test programs using multiprocessing and see how they
> go before committing to rewrite my current code. I've also been
> looking at 'parallel python' although it may have the same issues.
> http://www.parallelpython.com/
> 

parallelpython works for me on FreeBSD 6.2.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: cpan for python?

2010-03-02 Thread R Fritz

On 2010-02-28 06:31:56 -0800, [email protected] said:



On Feb 28, 2010, at 9:28 AM, Someone Something wrote:

Is there something like cpan for python? I like python's syntax, but 
Iuse perl because of cpan and the tremendous modules that it has.  --


Please search the mailing list archives.

This subject has been discussed to absolute death.


But somehow the question is not in the FAQ, though the answer is. See:
 



--


Randolph Fritz
 design machine group, architecture department, university of washington
[email protected] -or- [email protected]

--
http://mail.python.org/mailman/listinfo/python-list


Re: Adding to a module's __dict__?

2010-03-02 Thread Dave Angel

Terry Reedy wrote:

On 3/2/2010 11:18 AM, John Posner wrote:

On 3/2/2010 10:19 AM, Roy Smith wrote:


Somewhat sadly, in my case, I can't even machine process the header
file. I don't, strictly speaking, have a header file. What I have is
a PDF which documents what's in the header file, and I'm manually re-
typing the data out of that. Sigh.


There are Python modules to read/write pdf.


Here's an idea, perhaps too obvious, to minimize your keystrokes:

1. Create a text file with the essential data:

XYZ_FOO 0 The foo property
XYZ_BAR 1 The bar property
XYZ_BAZ 2 reserved for future use

2. Use a Python script to convert this into the desired code:

declare('XYZ_FOO', 0, "The foo property")
declare('XYZ_BAR', 1, "The bar property")
declare('XYZ_BAZ', 2, "reserved for future use")

Note:

 >>> s
'XYZ_FOO 0 The foo property'
 >>> s.split(None, 2)
['XYZ_FOO', '0', 'The foo property']


Given that the set of triples is constant, I would think about having the 
Python script do the computation just once, instead of with every 
import. In other words, the script should *call* the declare function 
and then write out the resulting set of dicts either to a .py or 
pickle file.


tjr


There have been lots of good suggestions in this thread.  Let me give 
you my take:


1) you shouldn't want to clutter up the global dictionary of your main 
processing module.  There's too much risk of getting a collision, either 
with the functions you write, or with some builtin.  That's especially 
true if you might later want to use a later version of that pdf file.  
Easiest solution for your purposes, make it a separate module.  Give it 
a name like defines, and in your main module, you use


import defines
print  defines.XYZ_FOO

And if that's too much typing, you can do:
import defines as I
print  I.XYZ_FOO

Next problem is to parse that pdf file.  One solution is to use a pdf 
library.  But another is to copy/paste it into a text file, and parse 
that.   Assuming it'll paste, and that the lines you want are 
recognizable (eg. they all begin as  #define), the parsing should be 
pretty easy.  The results of the parsing is a file  defines.py


Now, if the pdf ever changes, rerun your parsing program.  But don't run 
it every time your application runs.
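A minimal sketch of that one-shot generate step, assuming the pasted text holds "#define NAME VALUE" lines (the names and values below are made up):

```python
import os
import sys

pasted = """\
#define XYZ_FOO 0
#define XYZ_BAR 1
#define XYZ_BAZ 2
"""

# one-shot: write defines.py once; applications then just import it
with open('defines.py', 'w') as out:
    for line in pasted.splitlines():
        if line.startswith('#define'):
            _, name, value = line.split(None, 2)
            out.write('%s = %s\n' % (name, value))

sys.path.insert(0, os.getcwd())  # make sure the generated file is importable
import defines
print(defines.XYZ_BAR)   # 1
```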


If the pdf file were changing often, then I'd have a different answer:
2) define an empty class, just as a placeholder, and make one instance I.
Populate that instance with setattr() calls, but access 
it with direct attribute syntax, same as in our first example.



DaveA

--
http://mail.python.org/mailman/listinfo/python-list


Re: Docstrings considered too complicated

2010-03-02 Thread Ben Finney
Andreas Waldenburger  writes:

> Don't get me wrong; our whole system is more fragile than I find
> comfortable. But I guess getting 10ish different parties around the
> globe to work in complete unison is quite a feat, and I'm surprised it
> even works as it is. But it does, and I'm glad we don't have to
> micromanage other people's code.

It's rather odd that you think of “require general quality standards,
independently measurable and testable” to be “micromanaging”.

I guess that when even the *customers* will resist implementing such
quality expectations, it's little surprise that the vendors continue to
push out such shoddy work on their customers.

-- 
 \ “Why am I an atheist? I ask you: Why is anybody not an atheist? |
  `\  Everyone starts out being an atheist.” —Andy Rooney, _Boston |
_o__)Globe_ 1982-05-30 |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Broken references in postings

2010-03-02 Thread Ben Finney
Grant Edwards  writes:

> Or is it just individual news/mail clients that are broken?

This, I believe. Many clients mess up the References and In-Reply-To
fields, in the face of many years of complaint to the vendors.

Most free-software clients get it right, AFAICT.

-- 
 \  “Contentment is a pearl of great price, and whosoever procures |
  `\it at the expense of ten thousand desires makes a wise and |
_o__)  happy purchase.” —J. Balguy |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Writing an assembler in Python

2010-03-02 Thread Holger Mueller
Giorgos Tzampanakis  wrote:
> I'm implementing a CPU that will run on an FPGA. I want to have a  
> (dead) simple assembler that will generate the machine code for 
> me. I want to use Python for that. Are there any libraries that 
> can help me with the parsing of the assembly code?

Why coding assembler if you can type in hexdumps...

scnr
Holger
-- 
http://www.kati-und-holger.de/holgersblog.php
-- 
http://mail.python.org/mailman/listinfo/python-list


Multiprocessing problem

2010-03-02 Thread Matt Chaput

Hi,

I'm having a problem with the multiprocessing package.

I'm trying to use a simple pattern where a supervisor object starts a 
bunch of worker processes, instantiating them with two queues (a job 
queue for tasks to complete and an results queue for the results). The 
supervisor puts all the jobs in the "job" queue, then join()s the 
workers, and then pulls all the completed results off the "results" queue.


(I don't think I can just use something like Pool.imap_unordered for 
this because the workers need to be objects with state.)


Here's a simplified example:

http://pastie.org/850512

The problem is that seemingly randomly, but almost always, the worker 
processes will deadlock at some point and stop working before they 
complete. This will leave the whole program stalled forever. This seems 
more likely the more work each worker does (to the point where adding 
the time.sleep(0.01) as seen in the example code above guarantees it). 
The problem seems to occur on both Windows and Mac OS X.


I've tried many random variations of the code (e.g. using JoinableQueue, 
calling cancel_join_thread() on one or both queues even though I have no 
idea what it does, etc.) but keep having the problem.


Am I just using multiprocessing wrong? Is this a bug? Any advice?

Thanks,

Matt
--
http://mail.python.org/mailman/listinfo/python-list


CGI, POST, and file uploads

2010-03-02 Thread Mitchell L Model
Can someone tell me how to upload the contents of a (relatively small)  
file using an HTML form and CGI in Python 3.1? As far as I can tell  
from a half-day of experimenting, browsing, and searching the Python  
issue tracker, this is broken.  Very simple example:



  
  
  
<form action="http://localhost:9000/cgi/cgi-test.py"
      enctype="multipart/form-data"
      method="post">
  File <input type="file" name="contents">
  <input type="submit" value="Submit">
</form>

  



cgi-test.py:


#!/usr/local/bin/python3
import cgi
import sys
form = cgi.FieldStorage()
print(form.getfirst('contents'), file=sys.stderr)
print('done')


I run a CGI server with:

#!/usr/bin/env python3
from http.server import HTTPServer, CGIHTTPRequestHandler
HTTPServer(('', 9000), CGIHTTPRequestHandler).serve_forever()



What happens is that the upload never stops. It works in 2.6.

If I cancel the upload from the browser, I get the following output,  
so I know that basically things are working;

the cgi script just never finishes reading the POST input:

localhost - - [02/Mar/2010 16:37:36] "POST /cgi/cgi-test.py HTTP/1.1"  
200 -


Exception happened during processing of request from ('127.0.0.1',  
55779)

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/ 
python3.1/socketserver.py", line 281, in _handle_request_noblock

self.process_request(request, client_address)
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/ 
python3.1/socketserver.py", line 307, in process_request

self.finish_request(request, client_address)
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/ 
python3.1/socketserver.py", line 320, in finish_request

self.RequestHandlerClass(request, client_address, self)
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/ 
python3.1/socketserver.py", line 614, in __init__

self.handle()
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/ 
python3.1/http/server.py", line 352, in handle

self.handle_one_request()
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/ 
python3.1/http/server.py", line 346, in handle_one_request

method()
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/ 
python3.1/http/server.py", line 868, in do_POST

self.run_cgi()
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/ 
python3.1/http/server.py", line 1045, in run_cgi

if not self.rfile.read(1):
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/ 
python3.1/socket.py", line 214, in readinto

return self._sock.recv_into(b)
socket.error: [Errno 54] Connection reset by peer



--
http://mail.python.org/mailman/listinfo/python-list


Re: Queue peek?

2010-03-02 Thread mk

Daniel Stutzbach wrote:
On Tue, Mar 2, 2010 at 1:58 PM, Martin P. Hellwig 
mailto:[email protected]>> wrote:


What actually happens if multiple threads at the same time, write to
a shared dictionary (Not using the same key)?



All of Python's built-in types are thread safe.  Both updates will happen.


No need to use synchro primitives like locks?

I know that it may work, but that strikes me as somehow wrong... I'm 
used to using things like Lock().acquire() and Lock().release() when 
accessing shared data structures, whatever they are.


Although trying to do the "right thing" may indeed get one in trouble in 
case of deadlock caused by a bug in one's own program.


Regards,
mk


--
http://mail.python.org/mailman/listinfo/python-list


Re: Docstrings considered too complicated

2010-03-02 Thread Andreas Waldenburger
On Tue, 02 Mar 2010 19:05:25 +0100 Jean-Michel Pichavant
 wrote:

> Andreas Waldenburger wrote:
> > 
> > I had hoped that everyone just read it, went like "Oh geez.",
> > smiled it off with a hint of lesson learned and got back to
> > whatever it was they were doing. Alas, I was wrong ... and I'm
> > sorry.
> >
> There's something wrong saying that stupid people write working code 
> that totally satisfies your needs. Don't you agree ? ;-)
> 
No, in fact I don't.

It works. They are supposed to make it work. And that's what they do.
Whether or not they put their docstrings in the place they should does
not change that their code works.

Sorry, you guys drained all the funny out of me.

/W

-- 
INVALID? DE!

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Method / Functions - What are the differences?

2010-03-02 Thread Eike Welk
John Posner wrote:
> I've updated the text at this location:
> 
>  >   http://cl1p.net/bruno_0301.rst/

I think this is a very useful writeup! 

It would be perfect with a little bit of introduction that says:
1. - What it is: "The rough details of method look-up";
2. - Which contains some of the questions that made the authors write
the text. This way people with similar questions can find it with Google.

Additionally the link to the relevant section in the Python documentation 
would be great. I can't find it!

A link to an article about the details of class creation and metaclasses 
would be good too.


Thanks for writing this great little text,
Eike.



Re: Queue peek?

2010-03-02 Thread John Krukoff
On Tue, 2010-03-02 at 22:54 +0100, mk wrote:

> No need to use synchro primitives like locks?
> 
> I know that it may work, but that strikes me as somehow wrong... I'm 
> used to using things like Lock().acquire() and Lock().release() when 
> accessing shared data structures, whatever they are.


This is one of those places where the GIL is a good thing, and makes
your life simpler. You could consider it that the interpreter does the
locking for you for such primitive operations, if you like.
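For illustration, a minimal sketch of the point above: several threads writing distinct keys into one shared dict with no explicit Lock. Note that the atomicity of a single item assignment is a CPython/GIL implementation detail, not a language guarantee.

```python
import threading

shared = {}

def writer(start):
    # Each thread writes 1000 distinct keys; no explicit Lock.
    # A single dict item assignment is atomic under the GIL
    # (a CPython implementation detail, not a language guarantee).
    for i in range(start, start + 1000):
        shared[i] = i * 2

threads = [threading.Thread(target=writer, args=(n * 1000,))
           for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared))  # 4000: every update happened, none lost
```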
-- 
John Krukoff 
Land Title Guarantee Company



Re: Docstrings considered too complicated

2010-03-02 Thread Andreas Waldenburger
On Wed, 03 Mar 2010 08:22:40 +1100 Ben Finney
 wrote:

> Andreas Waldenburger  writes:
> 
> > Don't get me wrong; our whole system is more fragile than I find
> > comfortable. But I guess getting 10ish different parties around the
> > globe to work in complete unison is quite a feat, and I'm surprised
> > it even works as it is. But it does, and I'm glad we don't have to
> > micromanage other people's code.
> 
> It's rather odd that you think of “require general quality standards,
> independently measurable and testable” to be “micromanaging”.
> 
I should know better than to argue these things, but I don't. Hmph.

We demand testable quality standards, but not of their code. We demand
it of their software. We say *what* we want, they decide *how* they'll
do it. Noncompliance will be fined, by a contractually agreed
amount. Everything beyond that is micromanaging and detracts workforce
from the stuff *we* have to do.

We are in exactly the same kind of bond with a company that buys our
system (and support). I have yet to see any one of them demand to see
how we write our code. Why should they care? (Rhetorical question, I
refuse to discuss this any further.)


> I guess that when even the *customers* will resist implementing such
> quality expectations, it's little surprise that the vendors continue
> to push out such shoddy work on their customers.
> 
When I'm building bicycles I can go to the trouble of going by what
method of galvanization my tires are produced. Or I save myself the
trouble and just take the best offer and hold them responsible when
they don't deliver on their promise. Both possible, both work, and both
appropriate in certain situations.

You can keep discussing if you want, I've said more than I was hoping
to.

/W


-- 
INVALID? DE!





Re: cpan for python?

2010-03-02 Thread TomF

On 2010-03-02 13:14:50 -0800, R Fritz  said:


On 2010-02-28 06:31:56 -0800, [email protected] said:


On Feb 28, 2010, at 9:28 AM, Someone Something wrote:

Is there something like cpan for python? I like python's syntax, but 
Iuse perl because of cpan and the tremendous modules that it has.  --


Please search the mailing list archives.

This subject has been discussed to absolute death.


But somehow the question is not in the FAQ, though the answer is. See:
  



There is also a program called cpan, distributed with Perl.  It is used for 
searching, downloading, installing and testing modules from the CPAN 
repository.  It's far more extensive than setuptools.  AFAIK the python 
community has developed nothing like it.


-Tom



Re: Docstrings considered too complicated

2010-03-02 Thread Albert van der Horst
In article , Mel   wrote:
>Jean-Michel Pichavant wrote:
>> Andreas Waldenburger wrote:
>>> On Fri, 26 Feb 2010 09:09:36 -0600 Tim Daneliuk 
>>> wrote:
 Reminiscent of:
 mov  AX,BX   ; Move the contents of BX into AX
>
>>> Well, there might be some confusion there as to what gets moved where,
>>> wouldn't you say? I guess this goes away after a couple of months,
>>> though.
>
>> I agree to that statement, I was surprised that mov AX,BX assumes that
>> BX is the source, and AX the destination. I never programmed in
>> assembler though.
>
>You could think of it as a not bad use of the design principle "Clear The
>Simple Stuff Out Of The Way First".  Destinations are commonly a lot simpler
>than sources -- just as in Python assignment statements.  So you can tell
>more or less at a glance what's going to be changed, then get into the deep
>analysis to find what it's going to be changed to.

The real background is that a very long time ago at Intel
the first guy that wrote an assembler, got it "wrong", i.e.
violated the conventions established already at the time.

No nothing clever, nothing conscious, just reinventing the wheel
badly.

Next time you tell me that the MSDOS "file" system was well thought
out :-)

>   Mel.

Groetjes Albert

--
-- 
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
alb...@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst



Re: Broken references in postings

2010-03-02 Thread Aahz
In article ,
Grant Edwards   wrote:
>
>I've noticed recently that a lot of the "refernces" and
>"in-reply-to" headers in c.l.p are broken, resulting in the
>inability to move from a child to a parent in a tree.

One issue with the mail/news gateway is that (unless it's been fixed)
In-Reply-To: does not get copied to References: (which is what most
newsreaders need to thread properly).
-- 
Aahz ([email protected])   <*> http://www.pythoncraft.com/

"Many customs in this life persist because they ease friction and promote
productivity as a result of universal agreement, and whether they are
precisely the optimal choices is much less important." --Henry Spencer


Re: Is this secure?

2010-03-02 Thread Aahz
In article ,
Robert Kern   wrote:
>On 2010-02-28 01:28 AM, Aahz wrote:
>> In article,
>> Robert Kern  wrote:
>>>
>>> If you are storing the password instead of making your user remember
>>> it, most platforms have some kind of keychain secure password
>>> storage. I recommend reading up on the APIs available on your targeted
>>> platforms.
>>
>> Are you sure?  I haven't done a lot of research, but my impression was
>> that Windows didn't have anything built in.
>
>You're right, not built-in, but Windows does provide enough crypto
>services for a cross-platform Python implementation to be built:
>
>   http://pypi.python.org/pypi/keyring

Thanks you!  That's a big help!
-- 
Aahz ([email protected])   <*> http://www.pythoncraft.com/

"Many customs in this life persist because they ease friction and promote
productivity as a result of universal agreement, and whether they are
precisely the optimal choices is much less important." --Henry Spencer


Re: Docstrings considered too complicated

2010-03-02 Thread Aahz
In article <[email protected]>,
Andreas Waldenburger   wrote:
>
>Sorry, you guys drained all the funny out of me.

Don't let a few nitpickers do that!  I thought it was funny; after that,
just remember that every Usenet thread drifts away from *your* point.
-- 
Aahz ([email protected])   <*> http://www.pythoncraft.com/

"Many customs in this life persist because they ease friction and promote
productivity as a result of universal agreement, and whether they are
precisely the optimal choices is much less important." --Henry Spencer


Re: Docstrings considered too complicated

2010-03-02 Thread Grant Edwards
On 2010-03-02, Albert van der Horst  wrote:

> No nothing clever, nothing conscious, just reinventing the wheel
> badly.
>
> Next time you tell me that the MSDOS "file" system was well thought
> out :-)

Just a mediocre copy of the CP/M filesystem, which was in turn
copied from DEC's RSTS or RSX.

-- 
Grant Edwards   grant.b.edwardsYow! Kids, don't gross me
  at   off ... "Adventures with
  gmail.comMENTAL HYGIENE" can be
   carried too FAR!


Re: Email Script

2010-03-02 Thread Steve Holden
Victor Subervi wrote:
> On Tue, Mar 2, 2010 at 11:48 AM, Victor Subervi wrote:
> This sends only the first of the two emails. Why doesn't it work to
> send the second? What do?
> TIA,
> beno
> 
> 
> Should I put a timer between instances of Email? 
> 
> 
Np.

http://en.wikipedia.org/wiki/Cargo_cult_programming

regards
 Steve
-- 
Steve Holden   +1 571 484 6266   +1 800 494 3119
PyCon is coming! Atlanta, Feb 2010  http://us.pycon.org/
Holden Web LLC http://www.holdenweb.com/
UPCOMING EVENTS:http://holdenweb.eventbrite.com/



Re: Multiprocessing problem

2010-03-02 Thread Matt Chaput

On 3/2/2010 3:59 PM, Matt Chaput wrote:
> I'm trying to use a simple pattern where a supervisor object starts a
> bunch of worker processes, instantiating them with two queues (a job
> queue for tasks to complete and an results queue for the results). The
> supervisor puts all the jobs in the "job" queue, then join()s the
> workers, and then pulls all the completed results off the "results" 
queue.


> Here's a simplified example:
>
> http://pastie.org/850512

I should mention that if I change my code so the workers just pull 
things off the job queue but don't put any results on the result queue 
until after they see the None sentinel in the job queue and break out of 
the loop, I don't get the deadlock. So it's something about getting from 
one queue and putting to another queue in close proximity.


Hopefully I'm making a simple mistake with how I'm using the library and 
it'll be easy to fix...


Thanks,

Matt


Re: Docstrings considered too complicated

2010-03-02 Thread Steven D'Aprano
On Tue, 02 Mar 2010 22:51:56 +0100, Andreas Waldenburger wrote:

> On Tue, 02 Mar 2010 19:05:25 +0100 Jean-Michel Pichavant
>  wrote:
> 
>> Andreas Waldenburger wrote:
>> > 
>> > I had hoped that everyone just read it, went like "Oh geez.", smiled
>> > it off with a hint of lesson learned and got back to whatever it was
>> > they were doing. Alas, I was wrong ... and I'm sorry.
>> >
>> There's something wrong saying that stupid people write working code
>> that totally satisfies your needs. Don't you agree ? ;-)
>> 
> No, in fact I don't.
> 
> It works. They are supposed to make it work. And that's what they do.
> Whether or not they put their docstrings in the place they should does
> not change that their code works.
> 
> Sorry, you guys drained all the funny out of me.

Most of the customers I've worked for have insisted we follow best 
practices. Sometimes they even invent their own best practices that 
nobody has even heard of, and that's fun (not). You're the first one I've 
ever met that bitches publicly that your contractors don't follow best 
practice, but objects strenuously to the idea that you are right to care 
about that they don't.

Wow. Just... wow.

P.S. next time you want some not-quite-best-practice Python code written, 
send me an email off-list. I'll be more than happy to do not-quite-best-
practice work for you. 

*wink*



-- 
Steven


Re: Docstrings considered too complicated

2010-03-02 Thread Steven D'Aprano
On Tue, 02 Mar 2010 23:19:09 +0100, Andreas Waldenburger wrote:

> We demand testable quality standards, but not of their code. We demand
> it of their software. We say *what* we want, they decide *how* they'll
> do it. Noncompliance will be fined, by a contractually agreed amount.
> Everything beyond that is micromanaging and detracts workforce from the
> stuff *we* have to do.

You specify the Functional Requirements but not the Design Requirements. 
Fair enough.

 
> We are in exactly the same kind of bond with a company that buys our
> system (and support). I have yet to see any one of them demand to see
> how we write our code. Why should they care? (Rhetorical question, I
> refuse to discuss this any further.)

It is true that most people don't care how code is written. But they 
*should* care, because how it is written directly impacts the quality of 
the code. Saying "I don't care how it is written" is precisely the same 
as saying "I don't care how reliable, secure or efficient the code is".

Of course people do this. People also inhale carcinogenic chemicals, vote 
bad laws into place, drive too fast, ingest noxious chemicals, and spend 
hours on Usenet debating the number of angels that can dance on the head 
of a pin.


>> I guess that when even the *customers* will resist implementing such
>> quality expectations, it's little surprise that the vendors continue to
>> push out such shoddy work on their customers.
>> 
> When I'm building bicycles I can go to the trouble of going by what
> method of galvanization my tires are produced. Or I save myself the
> trouble and just take the best offer and hold them responsible when they
> don't deliver on their promise. Both possible, both work, and both
> appropriate in certain situations.

Many years ago, I assisted a professional building architect design a 
software system for specifying the requirements of major architectural 
works such as bridges and high-rise buildings. They specify *everything*, 
right down to the type of sand used in the concrete and the grade of 
steel used for the frame. When using the wrong type of sand could mean 
that the bridge collapses in 35 years, you soon learn that, yes, you damn 
well better care.



-- 
Steven


Re: Draft PEP on RSON configuration file format

2010-03-02 Thread Steven D'Aprano
On Tue, 02 Mar 2010 11:30:32 -0800, Patrick Maupin wrote:

> On Mar 2, 11:59 am, Terry Reedy  wrote:
> 
>> To me, comparing object notation with programming language is not
>> helpful to the OP's purpose.
> 
> Yes, I agree, it was a distraction.  I fell into the trap of responding
> to the ludicrous claim that "if X is a superset of Y, then X cannot
> possibly look better than Y" (a claim made by multiple people all
> thinking it was clever) by showing that Y has other supersets that do in
> fact look better than Y.

It's not ludicrous.

You claim that:

(1) JSON is too hard to edit;

(2) RSON is a superset of JSON (the PEP even explicitly says "All valid 
UTF-8 encoded JSON files are also valid RSON files");

(3) which implies that all JSON files are valid RSON files.

If you reject the logical conclusion that RSON must therefore also be too 
hard to edit, then perhaps JSON isn't too hard to edit either.

You seem to be taking the position that if you start with a config file 
config.json, it is "too hard to edit", but then by renaming it to 
config.rson it magically becomes easier to edit. That *is* ludicrous.

Perhaps what you mean to say is that JSON *can be* (not is) too hard to 
edit, and RSON *can be* too hard to edit too, but RSON has additional 
benefits, including being easier to edit *sometimes*.

So far you have done (in my opinion) a really poor job of explaining what 
those benefits are. You've bad-mouthed existing config formats, then 
tried to convince us that RSON is almost exactly the same as one of those 
formats apart from a couple of trivial changes of spelling (True for 
true, etc.).

In my opinion, if you're going to get any traction with RSON, you need to 
demonstrate some examples of where JSON actually is hard to write, and 
show how RSON makes it easier. It's not good enough showing badly written 
JSON, it has to be examples that can't be written less badly given the 
constraints of JSON.


-- 
Steven


Re: Docstrings considered too complicated

2010-03-02 Thread Ben Finney
Andreas Waldenburger  writes:

> It works. They are supposed to make it work. And that's what they do.
> Whether or not they put their docstrings in the place they should does
> not change that their code works.

No-one has been denying that.

What the quality of their source code *does* affect, though, is its
maintainability over time – especially in the inevitable event that the
relationship with you as their customer comes to an end.

The arguments I've seen here in this sub-thread have been in favour of
customers demanding that the code meets functional requirements *and*
source code quality requirements.

Just as customers should demand both that a building be built to do its
job well, *and* that its architectural plans meet measurable, testable
industry standards of quality for independent re-use at some
indeterminate later date.

If we don't demand such things as customers of program developers, we
deserve what programs we get.

-- 
 \   “Special today: no ice cream.” —mountain inn, Switzerland |
  `\   |
_o__)  |
Ben Finney


Re: os.fdopen() issue in Python 3.1?

2010-03-02 Thread Albert Hopkins
On Tue, 2010-03-02 at 17:32 +, MRAB wrote:
> The documentation also mentions the 'pty' module. Have you tried that
> instead? 

I tried to use pty.fork() but it also produces the same error.

I also tried passing 'r', and 'rb' to fdopen() but it didn't make any
difference.

-a




Re: os.fdopen() issue in Python 3.1?

2010-03-02 Thread Albert Hopkins
This appears to be Issue 5380[1] which is still open.  I've cc'ed myself
to that issue.

[1] http://bugs.python.org/issue5380





Image.frombuffer and warning

2010-03-02 Thread News123
Hi,

I am using the PIL function from_buffer in python 2.6.4

I am having the line
im2 = Image.frombuffer('L',(wx,wy),buf)


I receive the warning:
> ./pytest.py:63: RuntimeWarning: the frombuffer defaults may change in
a future release; for portability, change the call to read:
>   frombuffer(mode, size, data, 'raw', mode, 0, 1)
>   im2 = Image.frombuffer('L',(wx,wy),buf)


Naively I assumed, that changing my code to


im2 = Image.frombuffer('L',(wx,wy),buf,'raw','L',0,1)


should fix the issue:

However I receive exactly the same error as before.

What am I doing wrong?


thanks a lot in advance and bye


N


python 2.6: how to modify a PIL image from C without copying forth and back

2010-03-02 Thread News123
Hi,

I created a grayscale image with PIL.

Now I would like to write a C function, which reads almost all pixels
and will modify a few of them.


My current approach is:
- transform the image to a string()
- create a byte array huge enough to contain the resulting image
- call my c_function, which copies over the entire image in order
   to modify a few pixels
How can I achieve this with the least amount of copies?


## Python code snippet 

im = Image.open('grayscalefile_onebyteperpixel')
wx,wy = im.size

# create a string in order to pass it to my function
# I'm afraid this involves copying so  want to get rid of it
im_as_string = im.tostring()

# create a byte array in order to store my result
new_img_array = array.array('B', [0]*(wx*wy) )


# my function which should just change some pixels has
# to copy all unmodified pixels
my_extension.my_func(wx,wy,im_as_string,new_img_array)


im2 = Image.frombuffer('L',(wx,wy),new_img_array)
show(im2)

### C wrapper code snippet ###

int wx,wy;
Py_buffer img;
Py_buffer new_img;
Py_buffer table;
unsigned char *img_ptr;
unsigned char *new_img_ptr;

int ok = PyArg_ParseTuple(args,
"iis*w*",&wx,&wy,&img,&new_img);
img_ptr = (unsigned char *) img.buf;
new_img_ptr = (unsigned char *) new_img.buf;
my_func(wx,wy,img_ptr,new_img_ptr);


Thanks in advance for any suggestions to make this more efficient.


bye


N



Re: Draft PEP on RSON configuration file format

2010-03-02 Thread Paul Rubin
Steven D'Aprano  writes:
> (3) which implies that all JSON files are valid RSON files.
>
> If you reject the logical conclusion that RSON must therefore also be too 
> hard to edit, then perhaps JSON isn't too hard to edit either.

I would say that JSON is hard to edit because, among other things, it
has no comment syntax.  It's quite difficult to maintain a hand-edited
JSON file, or figure out what to edit into it, if it can't have any
comments in it describing what it's doing.  JSON is a serialization
format for machine to machine communication, not intended for hand
editing.  Simply adding a comment syntax to JSON would go a long way
towards making it easier to edit.
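As a sketch of the workaround people use in practice: strip comment lines before handing the text to the standard json module. The "#" comment convention here is an assumption for illustration; it is not part of any JSON specification.

```python
import json

def loads_with_comments(text):
    # Drop full-line "#" comments, then parse as ordinary JSON.
    kept = [line for line in text.splitlines()
            if not line.lstrip().startswith("#")]
    return json.loads("\n".join(kept))

config = """
# which server to talk to
{"host": "example.com",
 "port": 8080}
"""
print(loads_with_comments(config))
```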


Re: Multiprocessing problem

2010-03-02 Thread MRAB

Matt Chaput wrote:

Hi,

I'm having a problem with the multiprocessing package.

I'm trying to use a simple pattern where a supervisor object starts a 
bunch of worker processes, instantiating them with two queues (a job 
queue for tasks to complete and an results queue for the results). The 
supervisor puts all the jobs in the "job" queue, then join()s the 
workers, and then pulls all the completed results off the "results" queue.


(I don't think I can just use something like Pool.imap_unordered for 
this because the workers need to be objects with state.)


Here's a simplified example:

http://pastie.org/850512

The problem is that seemingly randomly, but almost always, the worker 
processes will deadlock at some point and stop working before they 
complete. This will leave the whole program stalled forever. This seems 
more likely the more work each worker does (to the point where adding 
the time.sleep(0.01) as seen in the example code above guarantees it). 
The problem seems to occur on both Windows and Mac OS X.


I've tried many random variations of the code (e.g. using JoinableQueue, 
calling cancel_join_thread() on one or both queues even though I have no 
idea what it does, etc.) but keep having the problem.


Am I just using multiprocessing wrong? Is this a bug? Any advice?


There's a difference between multithreading and multiprocessing.

In multithreading the threads share the same address space, so objects
can be passed between the threads simply by passing references to those
objects.

In multiprocessing, however, the process don't share an address space,
so the objects themselves need to be transferred between the processes
via pipes, but the pipes have a limited capacity.

If the main process doesn't get the results from the queue until the
worker processes terminate, and the worker processes don't terminate
until they've put their results in the queue, and the pipe consequently
fills up, then deadlock can result.
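A minimal sketch of the safe ordering under that constraint: drain the results queue *before* join()ing the workers, so no worker is left blocked on a full pipe. The names (worker, run) and the POSIX-only "fork" start method are assumptions for this sketch, not from the original post.

```python
import multiprocessing as mp

def worker(jobs, results):
    # Pull jobs until the None sentinel; push each result as we go.
    while True:
        item = jobs.get()
        if item is None:
            break
        results.put(item * item)

def run(n_workers=2, n_jobs=20):
    # "fork" start method assumed (POSIX only), for brevity.
    ctx = mp.get_context("fork")
    jobs, results = ctx.Queue(), ctx.Queue()
    procs = [ctx.Process(target=worker, args=(jobs, results))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    for i in range(n_jobs):
        jobs.put(i)
    for _ in procs:
        jobs.put(None)  # one sentinel per worker
    # Drain the results queue BEFORE join(): a worker blocked on a
    # full results pipe never exits, so join() would deadlock.
    out = [results.get() for _ in range(n_jobs)]
    for p in procs:
        p.join()
    return sorted(out)
```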


Re: Queue peek?

2010-03-02 Thread MRAB

John Krukoff wrote:

On Tue, 2010-03-02 at 22:54 +0100, mk wrote:


No need to use synchro primitives like locks?

I know that it may work, but that strikes me as somehow wrong... I'm 
used to using things like Lock().acquire() and Lock().release() when 
accessing shared data structures, whatever they are.



This is one of those places where the GIL is a good thing, and makes
your life simpler. You could consider it that the interpreter does the
locking for you for such primitive operations, if you like.


I suppose it depends on the complexity of the data structure. A dict's
methods are threadsafe, for example, but if you have a data structure
where access leads to multiple method calls then collectively they need
a lock.
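A hedged sketch of that distinction: a read-modify-write on a dict spans multiple operations, so collectively it needs a lock, even though each individual dict method call is safe on its own.

```python
import threading

lock = threading.Lock()
counts = {}

def tally(key, times):
    for _ in range(times):
        # The get() and the store are separate operations: between
        # them another thread can run and updates can be lost.
        # The lock makes the compound operation safe.
        with lock:
            counts[key] = counts.get(key, 0) + 1

threads = [threading.Thread(target=tally, args=("hits", 10000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counts["hits"])  # 40000 with the lock; possibly less without it
```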


Re: Multiprocessing problem

2010-03-02 Thread Matt Chaput

If the main process doesn't get the results from the queue until the
worker processes terminate, and the worker processes don't terminate
until they've put their results in the queue, and the pipe consequently
fills up, then deadlock can result.


The queue never fills up... on platforms with qsize() I can see this. I 
remove items from the results queue as I add to the job queue, and if I 
add timeouts everywhere the workers never raise Empty and the supervisor 
never raises Full. They just deadlock.


I've rewritten the code so the worker threads don't push information 
back while they run, they just write to a temporary file which the 
supervisor can read, which avoids the issue. But if anyone can tell me 
what I was doing wrong for future reference, I'd greatly appreciate it.


Thanks,

Matt

