Re: Getting Local MAC Address

2010-04-07 Thread Rebelo

Lawrence D'Oliveiro wrote:

In message
, Booter 
wrote:



I am new to python and was wondering if there was a way to get the mac
address from the local NIC?


What if you have more than one?



You can try netifaces:
http://pypi.python.org/pypi/netifaces/0.3
I use it on both Windows and Linux.
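If installing netifaces isn't an option, the standard library's uuid.getnode() is a rough single-interface fallback: it returns one of the host's MAC addresses as a 48-bit integer (or a random number if no MAC can be determined), so it sidesteps the "more than one NIC" question rather than answering it.

```python
import uuid

# One MAC address as a 48-bit integer (a random value with the
# multicast bit set if no hardware address can be found).
node = uuid.getnode()

# Format the 6 bytes as the usual colon-separated hex string.
mac = ':'.join('%02x' % ((node >> shift) & 0xff)
               for shift in range(40, -8, -8))
print(mac)
```

Note this gives you only one address; for per-interface addresses netifaces (or parsing platform tools) is still the way to go.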
--
http://mail.python.org/mailman/listinfo/python-list


Re: (a==b) ? 'Yes' : 'No'

2010-04-07 Thread Duncan Booth
Steven D'Aprano  wrote:

> On Tue, 06 Apr 2010 16:54:18 +, Duncan Booth wrote:
> 
>> Albert van der Horst  wrote:
>> 
>>> Old hands would have ...
>>> stamp =( weight>=1000 and  120 or
>>>  weight>=500  and  100 or
>>>  weight>=250  and  80  or
>>>  weight>=100  and  60  or
>>>44  )
>>> 
>>> (Kind of a brain twister, I think, inferior to C, once the c-construct
>>> is accepted as idiomatic.)
>> 
>> I doubt many old hands would try to join multiple and/or operators that
>> way. Most old hands would (IMHO) write the if statements out in full,
>> though some might remember that Python comes 'batteries included':
>> 
>>  from bisect import bisect
>>  WEIGHTS = [100, 250, 500, 1000]
>>  STAMPS = [44, 60, 80, 100, 120]
>> 
>>  ...
>>  stamp = STAMPS[bisect(WEIGHTS,weight)]
> 
> 
> Isn't that an awfully heavyweight and obfuscated solution for choosing 
> between five options? Fifty-five options, absolutely, but five?
> 
I did say most people would simply write out an if statement.

However, since you ask, using bisect here allows you to separate the data 
from the code and even with only 5 values that may be worthwhile. 
Especially if there's any risk it could become 6 next week.
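For reference, the bisect approach above, made self-contained:

```python
from bisect import bisect

WEIGHTS = [100, 250, 500, 1000]   # thresholds, ascending
STAMPS = [44, 60, 80, 100, 120]   # one more entry than WEIGHTS

def stamp_for(weight):
    # bisect returns the number of thresholds <= weight,
    # which is exactly the index of the matching stamp
    return STAMPS[bisect(WEIGHTS, weight)]
```

Adding a sixth bracket next week is then a one-line data change, not new control flow.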


-- 
Duncan Booth http://kupuguy.blogspot.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pass object or use self.object?

2010-04-07 Thread Bruno Desthuilliers

Lie Ryan a écrit :
(snip)


Since functions in Python are first-class objects, you can instead do
something like:

def process(document):
    # note: document should encapsulate its own logic
    document.do_one_thing()


Obvious case of encapsulation abuse here. Should a file object 
encapsulate all the csv parsing logic ? (and the html parsing, xml 
parsing, image manipulation etc...) ? Should a "model" object 
encapsulate the presentation logic ? I could go on for hours here...




and I think for your purpose, the mixin pattern could cleanly separate
manipulation from the document while still obeying the object-oriented
principle that a document is self-sufficient:

# language with only single-inheritance can only dream to do this


class Appendable(object):
    def append(self, text):
        self.text += text

class Savable(object):
    def save(self, fileobj):
        fileobj.write(self.text)

class Openable(object):
    def open(self, fileobj):
        self.text = fileobj.read()

class Document(Appendable, Savable, Openable):
    def __init__(self):
        self.text = ''
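For reference, here is the quoted mixin example made self-contained and exercised, with io.StringIO standing in for a real file object:

```python
import io

class Appendable(object):
    def append(self, text):
        self.text += text

class Savable(object):
    def save(self, fileobj):
        fileobj.write(self.text)

class Openable(object):
    def open(self, fileobj):
        self.text = fileobj.read()

class Document(Appendable, Savable, Openable):
    def __init__(self):
        self.text = ''

doc = Document()
doc.append('hello')
buf = io.StringIO()
doc.save(buf)      # buf now holds 'hello'
```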


Anyone having enough experience with Zope2 knows why this sucks big time.
--
http://mail.python.org/mailman/listinfo/python-list


Python and Regular Expressions

2010-04-07 Thread Richard Lamboj

Hello,

i want to parse this String:

version 3.5.1 {

$pid_dir = /opt/samba-3.5.1/var/locks/
$bin_dir = /opt/samba-3.5.1/bin/

service smbd {
bin = ${bin_dir}smbd -D
pid = ${pid_dir}smbd.pid
}
service nmbd {
bin = ${bin_dir}nmbd -D
pid = ${pid_dir}nmbd.pid
}
service winbindd {
bin = ${bin_dir}winbindd -D
pid = ${pid_dir}winbindd.pid
}
}

version 3.2.14 {

$pid_dir = /opt/samba-3.5.1/var/locks/
$bin_dir = /opt/samba-3.5.1/bin/

service smbd {
bin = ${bin_dir}smbd -D
pid = ${pid_dir}smbd.pid
}
service nmbd {
bin = ${bin_dir}nmbd -D
pid = ${pid_dir}nmbd.pid
}
service winbindd {
bin = ${bin_dir}winbindd -D
pid = ${pid_dir}winbindd.pid
}
} 

Step 1:

version 3.2.14 {

$pid_dir = /opt/samba-3.5.1/var/locks/
$bin_dir = /opt/samba-3.5.1/bin/

service smbd {
bin = ${bin_dir}smbd -D
pid = ${pid_dir}smbd.pid
}
service nmbd {
bin = ${bin_dir}nmbd -D
pid = ${pid_dir}nmbd.pid
}
service winbindd {
bin = ${bin_dir}winbindd -D
pid = ${pid_dir}winbindd.pid
}
} 

Step 2:
service smbd {
bin = ${bin_dir}smbd -D
pid = ${pid_dir}smbd.pid
}
Step 3:
$pid_dir = /opt/samba-3.5.1/var/locks/
$bin_dir = /opt/samba-3.5.1/bin/

Step 4:
bin = ${bin_dir}smbd -D
pid = ${pid_dir}smbd.pid

My Regular Expressions:
version[\s]*[\w\.]*[\s]*\{[\w\s\n\t\{\}=\$\.\-_\/]*\}
service[\s]*[\w]*[\s]*\{([\n\s\w\=]*(\$\{[\w_]*\})*[\w\s\-=\.]*)*\}

I think this is not a good solution. I'm trying with groups:
(service[\s\w]*)\{([\n\w\s=\$\-_\.]*)
but this part causes problems: ${bin_dir}

Kind Regards

Richi
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: plotting in python 3

2010-04-07 Thread [email protected]
On Apr 6, 11:52 pm, Rolf Camps  wrote:
> Op dinsdag 06-04-2010 om 14:55 uur [tijdzone -0500], schreef Christopher
> Choi:

> It was after the homework I asked my question. All plot solutions i
> found where for python2.x. gnuplot_py states on its homepage you need a
> 'working copy of numpy'. I don't think numpy is ported to python 3.x. Or
> is it?

Google charts could be a quick and dirty solution -- 
http://pygooglechart.slowchop.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Regular Expressions

2010-04-07 Thread Chris Rebert
On Wed, Apr 7, 2010 at 1:37 AM, Richard Lamboj  wrote:
> i want to parse this String:
>
> version 3.5.1 {
>
>        $pid_dir = /opt/samba-3.5.1/var/locks/
>        $bin_dir = /opt/samba-3.5.1/bin/
>
>        service smbd {
>                bin = ${bin_dir}smbd -D
>                pid = ${pid_dir}smbd.pid
>        }
>        service nmbd {
>                bin = ${bin_dir}nmbd -D
>                pid = ${pid_dir}nmbd.pid
>        }
>        service winbindd {
>                bin = ${bin_dir}winbindd -D
>                pid = ${pid_dir}winbindd.pid
>        }
> }
>
> version 3.2.14 {
>
>        $pid_dir = /opt/samba-3.5.1/var/locks/
>        $bin_dir = /opt/samba-3.5.1/bin/
>
>        service smbd {
>                bin = ${bin_dir}smbd -D
>                pid = ${pid_dir}smbd.pid
>        }
>        service nmbd {
>                bin = ${bin_dir}nmbd -D
>                pid = ${pid_dir}nmbd.pid
>        }
>        service winbindd {
>                bin = ${bin_dir}winbindd -D
>                pid = ${pid_dir}winbindd.pid
>        }
> }
>
> Step 1:
>
> version 3.2.14 {
>
>        $pid_dir = /opt/samba-3.5.1/var/locks/
>        $bin_dir = /opt/samba-3.5.1/bin/
>
>        service smbd {
>                bin = ${bin_dir}smbd -D
>                pid = ${pid_dir}smbd.pid
>        }
>        service nmbd {
>                bin = ${bin_dir}nmbd -D
>                pid = ${pid_dir}nmbd.pid
>        }
>        service winbindd {
>                bin = ${bin_dir}winbindd -D
>                pid = ${pid_dir}winbindd.pid
>        }
> }
>
> Step 2:
>        service smbd {
>                bin = ${bin_dir}smbd -D
>                pid = ${pid_dir}smbd.pid
>        }
> Step 3:
>        $pid_dir = /opt/samba-3.5.1/var/locks/
>        $bin_dir = /opt/samba-3.5.1/bin/
>
> Step 4:
>                bin = ${bin_dir}smbd -D
>                pid = ${pid_dir}smbd.pid
>
> My Regular Expressions:
> version[\s]*[\w\.]*[\s]*\{[\w\s\n\t\{\}=\$\.\-_\/]*\}
> service[\s]*[\w]*[\s]*\{([\n\s\w\=]*(\$\{[\w_]*\})*[\w\s\-=\.]*)*\}
>
> I think it was no good Solution. I'am trying with Groups:
> (service[\s\w]*)\{([\n\w\s=\$\-_\.]*)
> but this part makes Problems: ${bin_dir}

Regular expressions != Parsers

Every time someone tries to parse nested structures using regular
expressions, Jamie Zawinski kills a puppy.

Try using an *actual* parser, such as Pyparsing:
http://pyparsing.wikispaces.com/
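If you'd rather avoid a third-party dependency altogether, even a tiny hand-rolled, line-based parser handles this brace syntax, since blocks open with a trailing `{` and close with a bare `}` (an illustrative sketch, no error handling):

```python
def parse(lines):
    """Parse 'name { ... }' blocks into nested dicts;
    'key = value' lines become dict entries."""
    root = {}
    stack = [root]           # innermost open block is stack[-1]
    for raw in lines:
        line = raw.strip()
        if not line:
            continue
        if line.endswith('{'):
            child = {}
            stack[-1][line[:-1].strip()] = child
            stack.append(child)
        elif line == '}':
            stack.pop()
        elif '=' in line:
            key, _, value = line.partition('=')
            stack[-1][key.strip()] = value.strip()
    return root

sample = """\
version 3.5.1 {
    $pid_dir = /opt/samba-3.5.1/var/locks/
    service smbd {
        bin = ${bin_dir}smbd -D
        pid = ${pid_dir}smbd.pid
    }
}
"""
tree = parse(sample.splitlines())
```

Nesting is handled by the explicit stack, which is exactly what a single regular expression cannot express.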

Cheers,
Chris
--
Some people, when confronted with a problem, think:
"I know, I'll use regular expressions." Now they have two problems.
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: converting a timezone-less datetime to seconds since the epoch

2010-04-07 Thread Chris Withers

Hi Chris,

Chris Rebert wrote:

from calendar import timegm

def timestamp(dttm):
    return timegm(dttm.utctimetuple())
    # the *utc*timetuple change is just for extra consistency;
    # it shouldn't actually make a difference here
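A self-contained illustration of the fix (the datetimes here are naive and assumed to represent UTC):

```python
from calendar import timegm
from datetime import datetime

def timestamp(dttm):
    # naive datetime treated as UTC -> seconds since the epoch
    return timegm(dttm.utctimetuple())

epoch = timestamp(datetime(1970, 1, 1))    # the epoch itself
day_one = timestamp(datetime(1970, 1, 2))  # one day later
```

Using time.mktime() here instead would silently apply the local timezone offset, which was the original bug.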

And problem solved. As for what the problem was:

Paraphrasing the table I got added to the time module docs:
(http://docs.python.org/library/time.html)


That table is not obvious :-/
Could likely do with its own section...


To convert from struct_time in ***UTC***
to seconds since the epoch
use calendar.timegm()


...and really, wtf is timegm doing in calendar rather than in time? ;-)


I'd be *more* interested in knowing either why the timestamp function or the
tests are wrong and how to correct them...


You used a function intended for local times on UTC time data, and
therefore got incorrect results.


Thanks for the info, I don't think I'd ever have gotten to the bottom of 
this on my own! :-)


Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Regular Expressions

2010-04-07 Thread Bruno Desthuilliers

Richard Lamboj a écrit :

Hello,

i want to parse this String:

version 3.5.1 {

$pid_dir = /opt/samba-3.5.1/var/locks/
$bin_dir = /opt/samba-3.5.1/bin/

service smbd {
bin = ${bin_dir}smbd -D
pid = ${pid_dir}smbd.pid
}
service nmbd {
bin = ${bin_dir}nmbd -D
pid = ${pid_dir}nmbd.pid
}
service winbindd {
bin = ${bin_dir}winbindd -D
pid = ${pid_dir}winbindd.pid
}
}


(snip)

I think you'd be better writing a specific parser here. Paul McGuire's 
PyParsing package might help:


http://pyparsing.wikispaces.com/

My 2 cents.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Regular Expressions

2010-04-07 Thread Richard Lamboj
Am Wednesday 07 April 2010 10:52:14 schrieb Chris Rebert:
> On Wed, Apr 7, 2010 at 1:37 AM, Richard Lamboj  
wrote:
> > i want to parse this String:
> >
> > version 3.5.1 {
> >
> >        $pid_dir = /opt/samba-3.5.1/var/locks/
> >        $bin_dir = /opt/samba-3.5.1/bin/
> >
> >        service smbd {
> >                bin = ${bin_dir}smbd -D
> >                pid = ${pid_dir}smbd.pid
> >        }
> >        service nmbd {
> >                bin = ${bin_dir}nmbd -D
> >                pid = ${pid_dir}nmbd.pid
> >        }
> >        service winbindd {
> >                bin = ${bin_dir}winbindd -D
> >                pid = ${pid_dir}winbindd.pid
> >        }
> > }
> >
> > version 3.2.14 {
> >
> >        $pid_dir = /opt/samba-3.5.1/var/locks/
> >        $bin_dir = /opt/samba-3.5.1/bin/
> >
> >        service smbd {
> >                bin = ${bin_dir}smbd -D
> >                pid = ${pid_dir}smbd.pid
> >        }
> >        service nmbd {
> >                bin = ${bin_dir}nmbd -D
> >                pid = ${pid_dir}nmbd.pid
> >        }
> >        service winbindd {
> >                bin = ${bin_dir}winbindd -D
> >                pid = ${pid_dir}winbindd.pid
> >        }
> > }
> >
> > Step 1:
> >
> > version 3.2.14 {
> >
> >        $pid_dir = /opt/samba-3.5.1/var/locks/
> >        $bin_dir = /opt/samba-3.5.1/bin/
> >
> >        service smbd {
> >                bin = ${bin_dir}smbd -D
> >                pid = ${pid_dir}smbd.pid
> >        }
> >        service nmbd {
> >                bin = ${bin_dir}nmbd -D
> >                pid = ${pid_dir}nmbd.pid
> >        }
> >        service winbindd {
> >                bin = ${bin_dir}winbindd -D
> >                pid = ${pid_dir}winbindd.pid
> >        }
> > }
> >
> > Step 2:
> >        service smbd {
> >                bin = ${bin_dir}smbd -D
> >                pid = ${pid_dir}smbd.pid
> >        }
> > Step 3:
> >        $pid_dir = /opt/samba-3.5.1/var/locks/
> >        $bin_dir = /opt/samba-3.5.1/bin/
> >
> > Step 4:
> >                bin = ${bin_dir}smbd -D
> >                pid = ${pid_dir}smbd.pid
> >
> > My Regular Expressions:
> > version[\s]*[\w\.]*[\s]*\{[\w\s\n\t\{\}=\$\.\-_\/]*\}
> > service[\s]*[\w]*[\s]*\{([\n\s\w\=]*(\$\{[\w_]*\})*[\w\s\-=\.]*)*\}
> >
> > I think it was no good Solution. I'am trying with Groups:
> > (service[\s\w]*)\{([\n\w\s=\$\-_\.]*)
> > but this part makes Problems: ${bin_dir}
>
> Regular expressions != Parsers
>
> Every time someone tries to parse nested structures using regular
> expressions, Jamie Zawinski kills a puppy.
>
> Try using an *actual* parser, such as Pyparsing:
> http://pyparsing.wikispaces.com/
>
> Cheers,
> Chris
> --
> Some people, when confronted with a problem, think:
> "I know, I'll use regular expressions." Now they have two problems.
> http://blog.rebertia.com

Well, after some attempts with regexes, you're both right. I will use 
Pyparsing; it seems to be the better solution.

Kind Regards
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Q about assignment and references

2010-04-07 Thread jdbosmaus
Thanks to all for the informative answers.
You made me realize this is a wxPython issue. I have to say, wxPython
seems useful, and I'm glad it is available - but it doesn't have the
gentlest of learning curves.
-- 
http://mail.python.org/mailman/listinfo/python-list


PyCon Australia Call For Proposals

2010-04-07 Thread Richard Jones
Hi everyone,

I'm happy to announce that on the 26th and 27th of June we are running PyCon
Australia in Sydney!

 http://pycon-au.org/

We are looking for proposals for Talks on all aspects of Python programming
from novice to advanced levels; applications and frameworks, or how you
have been involved in introducing Python into your organisation.

We welcome first-time speakers; we are a community conference and we are
eager to hear about your experience. If you have friends or colleagues
who have something valuable to contribute, twist their arms to tell us
about it! Please also forward this Call for Proposals to anyone that you
feel may be interested.

To find out more go to the official Call for Proposals page here:

  http://pycon-au.org/2010/conference/proposals/

The deadline for proposal submission is the 29th of April. Proposal
acceptance will be announced on the 12th of May.


See you in Sydney in June!

Richard Jones
PyCon AU Program Chair
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: staticmethod and namespaces

2010-04-07 Thread Дамјан Георгиевски


> Having an odd problem that I solved, but wondering if its the best
> solution (seems like a bit of a hack).
> 
> First off, I'm using an external DLL that requires static callbacks,
> but because of this, I'm losing instance info. It could be import
> related? It will make more sense after I diagram it:

> -
> So basically I added a list of instances to the base class so I can
> get at them from the staticmethod.

Have you tried using a closure, something like this:

class A:
    def call(self, args):
        def callback(a, b):  # normal function
            # but I can access self here too
            ...
        call_the_dll_function(callback, args1, args2...)
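A runnable sketch of the closure idea, with a plain function standing in for the DLL's registration call (register_callback is hypothetical, not a real DLL API):

```python
# Stand-in for the DLL: it accepts only a plain callable and passes
# no instance information back.
def register_callback(cb):
    cb(1, 2)

class A:
    def __init__(self):
        self.total = 0

    def call(self):
        def callback(a, b):
            # closure: 'self' is captured from the enclosing call()
            self.total += a + b
        register_callback(callback)

obj = A()
obj.call()          # obj.total now reflects the callback's work
```

The callback is a plain function as far as the DLL is concerned, yet it still reaches the instance through the closed-over `self`, so no global instance registry is needed.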


> What's bothering me the most is I can't use the global app instance in
> the A.py module.
> 
> How can I get at the app instance (currently I'm storing that along
> with the class instance in the constructor)?
> Is there another way to do this that's not such a hack?
> 
> Sorry for the double / partial post :(

-- 
дамјан ((( http://damjan.softver.org.mk/ )))

Q: What's tiny and yellow and very, very, dangerous?
A: A canary with the super-user password.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: converting a timezone-less datetime to seconds since the epoch

2010-04-07 Thread Floris Bruynooghe
On Apr 7, 9:57 am, Chris Withers  wrote:
> Chris Rebert wrote:
> > To convert from struct_time in ***UTC***
> > to seconds since the epoch
> > use calendar.timegm()
>
> ...and really, wtf is timegm doing in calendar rather than in time? ;-)

You're not alone in finding this strange: http://bugs.python.org/issue6280

(the short apologetic reason is that timegm is written in Python
rather than in C)

Regards
Floris
-- 
http://mail.python.org/mailman/listinfo/python-list



Re: Impersonating a Different Logon

2010-04-07 Thread Kevin Holleran
On Tue, Apr 6, 2010 at 4:11 PM, Tim Golden  wrote:
> On 06/04/2010 20:26, Kevin Holleran wrote:
>>
>> Hello,
>>
>> I am sweeping some of our networks to find devices.  When I find a
>> device I try to connect to the registry using _winreg and then query a
>> specific key that I am interested in.  This works great for machines
>> that are on our domain, but there are left over machines that are
>> stand alone and the credentials fail.  I understand you cannot pass in
>> credentials with _winreg but is there a way to simulate a logon of
>> another user (the machine's local admin) to query the registry?
>
> The simplest may well be to use WMI (example from here):
>
> http://timgolden.me.uk/python/wmi/cookbook.html#list-registry-keys
>
> 
> import wmi
>
> reg = wmi.WMI (
>  "machine",
>  user="machine\admin",
>  password="Secret",
>  namespace="DEFAULT"
> ).StdRegProv
>
> result, names = reg.EnumKey (
>  hDefKey=_winreg.HKEY_LOCAL_MACHINE,
>  sSubKeyName="Software"
> )
> for name in names:
>  print name
>
> 
>
> I can't try it out at the moment but in principle it should work.
>
> TJG
> --
> http://mail.python.org/mailman/listinfo/python-list
>


Thanks, I was able to connect to the remote machine.  However, how do
I query for a very specific key value?  I have to scan hundreds of
machines and want to reduce what I am querying.  I would like to
be able to scan a very specific key and report on its value.

With _winreg I could just do:
keyPath = _winreg.ConnectRegistry(r"\\" + ip_a, _winreg.HKEY_LOCAL_MACHINE)
try:
    hKey = _winreg.OpenKey(keyPath,
        r"SYSTEM\CurrentControlSet\services\Tcpip\Parameters", 0,
        _winreg.KEY_READ)
    value, type = _winreg.QueryValueEx(hKey, "Domain")
except WindowsError:
    pass

Also, is there a performance hit with WMI such that perhaps I should try
to connect with the inherited credentials using _winreg first and then
fall back to WMI if that fails?

Thanks for your help!
Kevin
-- 
http://mail.python.org/mailman/listinfo/python-list


Striving for PEP-8 compliance

2010-04-07 Thread Tom Evans
[ Please keep me cc'ed, I'm not subscribed ]

Hi all

I've written a bunch of internal libraries for my company, and they
all use two space indents, and I'd like to be more consistent and
conform to PEP-8 as much as I can.

My problem is I would like to be certain that any changes do not alter
the logic of the libraries. When doing this in C, I would simply
compile each module to an object file, calculate the MD5 of the object
file, then make the whitespace changes, recompile the object file and
compare the checksums. If the checksums match, then the files are
equivalent.

Is there any way to do something semantically the same as this with python?

Cheers

Tom
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Striving for PEP-8 compliance

2010-04-07 Thread geremy condra
On Wed, Apr 7, 2010 at 10:53 AM, Tom Evans  wrote:
> [ Please keep me cc'ed, I'm not subscribed ]
>
> Hi all
>
> I've written a bunch of internal libraries for my company, and they
> all use two space indents, and I'd like to be more consistent and
> conform to PEP-8 as much as I can.
>
> My problem is I would like to be certain that any changes do not alter
> the logic of the libraries. When doing this in C, I would simply
> compile each module to an object file, calculate the MD5 of the object
> file, then make the whitespace changes, recompile the object file and
> compare the checksums. If the checksums match, then the files are
> equivalent.
>
> Is there any way to do something semantically the same as this with python?

Probably the logical thing would be to run your test suite against
it, but assuming that's not an option, you could run the whole
thing through dis and check that the bytecode is identical. There's
probably an easier way to do this though.
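One way to sketch the bytecode-comparison idea in pure Python, assuming the edit changes only indentation width (line numbers are stored separately from the bytecode, so the function bodies compile identically):

```python
# Two sources that differ only in indentation width.
src_two = "def f(x):\n  return x + 1\n"
src_four = "def f(x):\n    return x + 1\n"

code_two = compile(src_two, "<mod>", "exec")
code_four = compile(src_four, "<mod>", "exec")

# co_consts[0] is the code object for f in both compiled modules.
f_two, f_four = code_two.co_consts[0], code_four.co_consts[0]
same = f_two.co_code == f_four.co_code
```

A real checker would recurse into nested code objects and compare co_consts, co_names, etc. as well, but this shows the principle: raw bytecode survives a pure-reindent.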

Geremy Condra
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: imports again

2010-04-07 Thread Gabriel Genellina

En Tue, 06 Apr 2010 14:25:38 -0300, Alex Hall  escribió:


Sorry this is a forward (long story involving a braille notetaker's
bad copy/paste and GMail's annoying mobile site). Basically, I am
getting errors when I run the project at
http://www.gateway2somewhere.com/sw.zip


Error 404

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Impersonating a Different Logon

2010-04-07 Thread Tim Golden

On 07/04/2010 14:57, Kevin Holleran wrote:

Thanks, I was able to connect to the remote machine.  However, how do
I query for a very specific key value?  I have to scan hundreds of
machines and want to reduce what I am querying.  I would like to
be able to scan a very specific key and report on its value.


The docs for the WMI Registry provider are here:

  http://msdn.microsoft.com/en-us/library/aa393664%28VS.85%29.aspx

and you probably want this:

  http://msdn.microsoft.com/en-us/library/aa390788%28v=VS.85%29.aspx



With _winreg I could just do:
keyPath = _winreg.ConnectRegistry(r"\\" + ip_a,_winreg.HKEY_LOCAL_MACHINE)
try:
   hKey = _winreg.OpenKey (keyPath,
r"SYSTEM\CurrentControlSet\services\Tcpip\Parameters", 0,
_winreg.KEY_READ)
   value,type = _winreg.QueryValueEx(hKey,"Domain")

Also, is there a performance hit with WMI where perhaps I want to try
to connect with the inherited credentials using _winreg first and then
use WMI if that fails?


Certainly a consideration. Generally WMI isn't the fastest thing in the
world either to connect nor to query. I suspect a try/except with
_winreg is worth a go, falling through to WMI.

TJG
--
http://mail.python.org/mailman/listinfo/python-list


Re: Striving for PEP-8 compliance

2010-04-07 Thread Grant Edwards
On 2010-04-07, Tom Evans  wrote:
> [ Please keep me cc'ed, I'm not subscribed ]

Sorry.  I post via gmane.org, so cc'ing you would require some extra
work, and I'm too lazy.

> I've written a bunch of internal libraries for my company, and they
> all use two space indents, and I'd like to be more consistent and
> conform to PEP-8 as much as I can.
>
> My problem is I would like to be certain that any changes do not
> alter the logic of the libraries. When doing this in C, I would
> simply compile each module to an object file, calculate the MD5 of
> the object file, then make the whitespace changes, recompile the
> object file and compare the checksums. If the checksums match, then
> the files are equivalent.

In my experience, that doesn't work.  Whitespace changes can affect
line numbers, so object files containing debug info will differ.  Many
object formats also contain other "meta-data" about date, time, path of
source file, etc. that can differ between semantically equivalent
files.

> Is there any way to do something semantically the same as this with python?

Have you tried compiling the python files and compare the resulting
.pyc files?

-- 
Grant Edwards   grant.b.edwardsYow! I selected E5 ... but
  at   I didn't hear "Sam the Sham
  gmail.comand the Pharoahs"!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simplify Python

2010-04-07 Thread AlienBaby
On 6 Apr, 20:04, ja1lbr3ak  wrote:
> I'm trying to teach myself Python, and so have been simplifying a
> calculator program that I wrote. The original was 77 lines for the
> same functionality. Problem is, I've hit a wall. Can anyone help?
>
> loop = input("Enter 1 for the calculator, 2 for the Fibonacci
> sequence, or something else to quit: ")
> while loop < 3 and loop > 0:
>     if loop == 1:
>         print input("\nPut in an equation: ")
>     if loop == 2:
>         a, b, n = 1, 1, (input("\nWhat Fibonacci number do you want to
> go to? "))
>         while n > 0:
>             print a
>             a, b, n = b, a+b, n-1
>     loop = input("\nEnter 1 for the calculator, 2 for the Fibonacci
> sequence, or something else to quit: ")


To replicate what you have above, I would do something like;

quit = False
while not quit:
    choice = input('1 for calc, 2 for fib, any other to quit: ')
    if choice == 1:
        print input('Enter expression: ')
    elif choice == 2:
        a, b, n = 1, 1, input("\nWhat Fibonacci number do you want to go to? ")
        while n > 0:
            print a
            a, b, n = b, a+b, n-1
    else:
        quit = True
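The Fibonacci loop itself is easy to factor out and test in isolation (a sketch; written for Python 3, where print is a function, unlike the Python 2 code above):

```python
def fib_upto(n):
    """Return the first n Fibonacci numbers, mirroring the loop above."""
    out = []
    a, b = 1, 1
    while n > 0:
        out.append(a)
        a, b, n = b, a + b, n - 1
    return out

print(fib_upto(5))
```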



?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Striving for PEP-8 compliance

2010-04-07 Thread Gabriel Genellina
En Wed, 07 Apr 2010 11:53:58 -0300, Tom Evans   
escribió:



[ Please keep me cc'ed, I'm not subscribed ]


Sorry; you may read this at  
http://groups.google.com/group/comp.lang.python/



I've written a bunch of internal libraries for my company, and they
all use two space indents, and I'd like to be more consistent and
conform to PEP-8 as much as I can.


reindent.py (in the Tools directory of your Python installation) does  
exactly that.



My problem is I would like to be certain that any changes do not alter
the logic of the libraries. When doing this in C, I would simply
compile each module to an object file, calculate the MD5 of the object
file, then make the whitespace changes, recompile the object file and
compare the checksums. If the checksums match, then the files are
equivalent.


If you only reindent the code (without adding/removing lines) then you can  
compare the compiled .pyc files (excluding the first 8 bytes that contain  
a magic number and the source file timestamp). Remember that code objects  
contain line number information.


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list




Re: Striving for PEP-8 compliance

2010-04-07 Thread Tom Evans
On Wed, Apr 7, 2010 at 4:10 PM, geremy condra  wrote:
> On Wed, Apr 7, 2010 at 10:53 AM, Tom Evans  wrote:
>> [ Please keep me cc'ed, I'm not subscribed ]
>>
>> Hi all
>>
>> I've written a bunch of internal libraries for my company, and they
>> all use two space indents, and I'd like to be more consistent and
>> conform to PEP-8 as much as I can.
>>
>> My problem is I would like to be certain that any changes do not alter
>> the logic of the libraries. When doing this in C, I would simply
>> compile each module to an object file, calculate the MD5 of the object
>> file, then make the whitespace changes, recompile the object file and
>> compare the checksums. If the checksums match, then the files are
>> equivalent.
>>
>> Is there any way to do something semantically the same as this with python?
>
> Probably the logical thing would be to run your test suite against
> it, but assuming that's not an option, you could run the whole
> thing through dis and check that the bytecode is identical. There's
> probably an easier way to do this though.
>
> Geremy Condra
>

dis looks like it may be interesting.

I had looked a little at the bytecode, but only enough to rule out md5
sums as a solution. Looking closer at the bytecode for a simple
module, it seems like only a few bytes change (see below for hexdumps
of the pyc).

So in this case, only bytes 5 and 6 changed, the rest of the file
remains exactly the same. Looks like I need to do some digging to find
out what those bytes mean.
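For what it's worth: in a CPython 2.x .pyc, bytes 0-3 are the magic number and bytes 4-7 are the little-endian modification time of the source file, so touching the source and recompiling changes exactly the timestamp bytes observed above. A quick decode of the header taken from the hexdumps:

```python
import struct

# First 8 bytes from the hexdump above: 4-byte magic + 4-byte source mtime.
header = b"\xd1\xf2\x0d\x0a\x51\xa7\xbc\x4b"
magic = header[:4]
mtime = struct.unpack("<I", header[4:8])[0]
print(hex(mtime))  # 0x4bbca751, an early-April-2010 Unix timestamp
```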

Cheers

Tom

2 space indents:

  d1 f2 0d 0a 51 a7 bc 4b  63 00 00 00 00 00 00 00  |Q..Kc...|
0010  00 02 00 00 00 40 00 00  00 73 28 00 00 00 64 00  |[email protected](...d.|
0020  00 84 00 00 5a 00 00 65  01 00 64 01 00 6a 02 00  |Z..e..d..j..|
0030  6f 0e 00 01 65 00 00 65  02 00 83 01 00 01 6e 01  |o...e..e..n.|
0040  00 01 64 02 00 53 28 03  00 00 00 63 01 00 00 00  |..d..S(c|
0050  01 00 00 00 03 00 00 00  43 00 00 00 73 20 00 00  |C...s ..|
0060  00 64 01 00 47 48 7c 00  00 6f 10 00 01 68 01 00  |.d..GH|..o...h..|
0070  64 02 00 64 01 00 36 47  48 6e 01 00 01 64 00 00  |d..d..6GHn...d..|
0080  53 28 03 00 00 00 4e 74  05 00 00 00 68 65 6c 6c  |S(Nthell|
0090  6f 74 05 00 00 00 77 6f  72 6c 64 28 00 00 00 00  |otworld(|
00a0  28 01 00 00 00 74 03 00  00 00 62 61 72 28 00 00  |(tbar(..|
00b0  00 00 28 00 00 00 00 73  0e 00 00 00 74 65 73 74  |..(stest|
00c0  6c 69 62 2f 66 6f 6f 2e  70 79 74 03 00 00 00 66  |lib/foo.pytf|
00d0  6f 6f 01 00 00 00 73 08  00 00 00 00 01 05 01 07  |oos.|
00e0  01 03 01 74 08 00 00 00  5f 5f 6d 61 69 6e 5f 5f  |...t__main__|
00f0  4e 28 03 00 00 00 52 03  00 00 00 74 08 00 00 00  |N(Rt|
0100  5f 5f 6e 61 6d 65 5f 5f  74 04 00 00 00 54 72 75  |__name__tTru|
0110  65 28 00 00 00 00 28 00  00 00 00 28 00 00 00 00  |e(((|
0120  73 0e 00 00 00 74 65 73  74 6c 69 62 2f 66 6f 6f  |stestlib/foo|
0130  2e 70 79 74 08 00 00 00  3c 6d 6f 64 75 6c 65 3e  |.pyt|
0140  01 00 00 00 73 04 00 00  00 09 07 0d 01   |s|
014d


4 space indents:

  d1 f2 0d 0a 51 a7 bc 4b  63 00 00 00 00 00 00 00  |Q..Kc...|
0010  00 02 00 00 00 40 00 00  00 73 28 00 00 00 64 00  |[email protected](...d.|
0020  00 84 00 00 5a 00 00 65  01 00 64 01 00 6a 02 00  |Z..e..d..j..|
0030  6f 0e 00 01 65 00 00 65  02 00 83 01 00 01 6e 01  |o...e..e..n.|
0040  00 01 64 02 00 53 28 03  00 00 00 63 01 00 00 00  |..d..S(c|
0050  01 00 00 00 03 00 00 00  43 00 00 00 73 20 00 00  |C...s ..|
0060  00 64 01 00 47 48 7c 00  00 6f 10 00 01 68 01 00  |.d..GH|..o...h..|
0070  64 02 00 64 01 00 36 47  48 6e 01 00 01 64 00 00  |d..d..6GHn...d..|
0080  53 28 03 00 00 00 4e 74  05 00 00 00 68 65 6c 6c  |S(Nthell|
0090  6f 74 05 00 00 00 77 6f  72 6c 64 28 00 00 00 00  |otworld(|
00a0  28 01 00 00 00 74 03 00  00 00 62 61 72 28 00 00  |(tbar(..|
00b0  00 00 28 00 00 00 00 73  0e 00 00 00 74 65 73 74  |..(stest|
00c0  6c 69 62 2f 66 6f 6f 2e  70 79 74 03 00 00 00 66  |lib/foo.pytf|
00d0  6f 6f 01 00 00 00 73 08  00 00 00 00 01 05 01 07  |oos.|
00e0  01 03 01 74 08 00 00 00  5f 5f 6d 61 69 6e 5f 5f  |...t__main__|
00f0  4e 28 03 00 00 00 52 03  00 00 00 74 08 00 00 00  |N(Rt|
0100  5f 5f 6e 61 6d 65 5f 5f  74 04 00 00 00 54 72 75  |__name__tTru|
0110  65 28 00 00 00 00 28 00  00 00 00 28 00 00 00 00  |e(((|
0120  73 0e 00 00 00 74 65 73  74 6c 69 62 2f 66 6f 6f  |stestlib/foo|
0130  2e 70 79 74 08 00 00 00  3c 6d 6f 64 75 6c 65 3e  |.pyt|
0140  01 00 00 00 73 04 00 00  00 09 07 0d 01   |s|
014d

python code: testlib/foo.py

def foo(bar):
  print "hello"
  if bar:
print {
'hello': 'world'
  }

if __name__ == "_

Re: Striving for PEP-8 compliance

2010-04-07 Thread Robert Kern

On 2010-04-07 11:06 AM, Tom Evans wrote:

On Wed, Apr 7, 2010 at 4:10 PM, geremy condra  wrote:

On Wed, Apr 7, 2010 at 10:53 AM, Tom Evans  wrote:

[ Please keep me cc'ed, I'm not subscribed ]

Hi all

I've written a bunch of internal libraries for my company, and they
all use two space indents, and I'd like to be more consistent and
conform to PEP-8 as much as I can.

My problem is I would like to be certain that any changes do not alter
the logic of the libraries. When doing this in C, I would simply
compile each module to an object file, calculate the MD5 of the object
file, then make the whitespace changes, recompile the object file and
compare the checksums. If the checksums match, then the files are
equivalent.

Is there any way to do something semantically the same as this with python?


Probably the logical thing would be to run your test suite against
it, but assuming that's not an option, you could run the whole
thing through dis and check that the bytecode is identical. There's
probably an easier way to do this though.

Geremy Condra



dis looks like it may be interesting.

I had looked a little at the bytecode, but only enough to rule out md5
sums as a solution. Looking closer at the bytecode for a simple
module, it seems like only a few bytes change (see below for hexdumps
of the pyc).

So in this case, only bytes 5 and 6 changed, the rest of the file
remains exactly the same. Looks like I need to do some digging to find
out what those bytes mean.


You will also have to be careful about docstrings. If you are cleaning up for 
style reasons, you will also end up indenting the triple-quoted docstrings and 
thus change their contents. This will be reflected in the bytecode.


In [1]: def f():
   ...: """This is
   ...: a docstring.
   ...: """
   ...:
   ...:

In [2]: def g():
   ...:   """This is
   ...:   a docstring.
   ...:   """
   ...:
   ...:

In [3]: f.__doc__
Out[3]: 'This is \na docstring.\n'

In [4]: g.__doc__
Out[4]: 'This is\n  a docstring.\n  '


--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list


Re: (a==b) ? 'Yes' : 'No'

2010-04-07 Thread Emile van Sebille

On 4/6/2010 9:20 PM Steven D'Aprano said...

On Tue, 06 Apr 2010 16:54:18 +, Duncan Booth wrote:

Most old hands would (IMHO) write the if statements out in full,
though some might remember that Python comes 'batteries included':

  from bisect import bisect
  WEIGHTS = [100, 250, 500, 1000]
  STAMPS = [44, 60, 80, 100, 120]

  ...
  stamp = STAMPS[bisect(WEIGHTS,weight)]



Isn't that an awfully heavyweight and obfuscated solution for choosing
between five options? Fifty-five options, absolutely, but five?



Would it be easier to digest as:

from bisect import bisect as selectindex #

WEIGHTLIMITS = [100, 250, 500, 1000]
POSTAGEAMOUNTS = [44, 60, 80, 100, 120]

postage = POSTAGEAMOUNTS[selectindex(WEIGHTLIMITS, weight)]

---

I've used bisect this way for some time -- I think Tim may have pointed 
it out -- and it's been handy ever since.
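A self-contained run of the same idea, using the names and values from the thread:

```python
from bisect import bisect

WEIGHTS = [100, 250, 500, 1000]
STAMPS = [44, 60, 80, 100, 120]

def stamp_for(weight):
    # bisect returns the insertion point, so a weight equal to a
    # boundary (e.g. exactly 100) falls into the next bracket
    return STAMPS[bisect(WEIGHTS, weight)]

print(stamp_for(99), stamp_for(100), stamp_for(1200))  # 44 60 120
```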


Emile


--
http://mail.python.org/mailman/listinfo/python-list


Re: pass object or use self.object?

2010-04-07 Thread Tim Arnold
On Apr 6, 11:19 am, Jean-Michel Pichavant 
wrote:
> Tim Arnold wrote:
> > Hi,
> > I have a few classes that manipulate documents. One is really a
> > process that I use a class for just to bundle a bunch of functions
> > together (and to keep my call signatures the same for each of my
> > manipulator classes).
>
> > So my question is whether it's bad practice to set things up so each
> > method operates on self.document or should I pass document around from
> > one function to the next?
> > pseudo code:
>
> > class ManipulatorA(object):
> >     def process(self, document):
> >         document = self.do_one_thing(document)
> >         document = self.do_another_thing(document)
> >         # bunch of similar lines
> >         return document
>
> > or
>
> > class ManipulatorA(object):
> >     def process(self, document):
> >         self.document = document
> >         self.do_one_thing() # operates on self.document
> >         self.do_another_thing()
> >         # bunch of similar lines
> >         return self.document
>
> > I ask because I've been told that the first case is easier to
> > understand. I never thought of it before, so I'd appreciate any
> > comments.
> > thanks,
> > --Tim
>
> Usually, when using classes as namespace, functions are declared as
> static (or as classmethod if required).
> e.g.
>
> class Foo:
>     @classmethod
>     def process(cls, document):
>         print 'process of'
>         cls.foo(document)
>
>     @staticmethod
>     def foo(document):
>         print document
>
> In [5]: Foo.process('my document')
> process of
> my document
>
> There is no more question about self, 'cause there is no more self. You
> don't need to create any instance of Foo neither.
>
> JM

Thanks for the input. I had always wondered about static methods; I'd
ask myself "why don't they just write a function in the first place?"

Now I see why. My situation poses a problem that I guess static
methods were invented to solve. And it settles the question about
using self.document since there is no longer any self. And as Bruno
says, it's easier to understand and refactor.

thanks,
--Tim
-- 
http://mail.python.org/mailman/listinfo/python-list


[2.5.1/cookielib] How to display specific cookie?

2010-04-07 Thread Gilles Ganault
Hello

I'm using ActivePython 2.5.1 and the cookielib package to retrieve web
pages.

I'd like to display a given cookie from the cookiejar instead of the
whole thing:


#OK
for index, cookie in enumerate(cj):
print index, '  :  ', cookie

#How to display just PHPSESSID?
#AttributeError: CookieJar instance has no attribute '__getitem__'
print "PHPSESSID: %s" % cj['PHPSESSID']


I'm sure it's very simple but googling for this didn't return samples.

Thank you.
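For reference, a CookieJar supports iteration but not indexing, which is why cj['PHPSESSID'] raises AttributeError. A lookup helper along these lines does the job (the helper name is made up; Cookie objects carry .name and .value attributes):

```python
# CookieJar has no __getitem__, but it is iterable.
def get_cookie_value(cj, name):
    for cookie in cj:
        if cookie.name == name:
            return cookie.value
    return None  # not found
```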
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: lambda with floats

2010-04-07 Thread Peter Pearson
On Tue, 06 Apr 2010 23:16:18 -0400, monkeys paw  wrote:
> I have the following acre meter which works for integers,
> how do i convert this to float? I tried
>
> return float ((208.0 * 208.0) * n)
>
> >>> def s(n):
> ...   return lambda x: (208 * 208) * n
> ...
> >>> f = s(1)
> >>> f(1)
> 43264
> >>> 208 * 208
> 43264
> >>> f(.25)
> 43264

The expression "lambda x: (208 * 208) * n" is independent of x.
Is that what you intended?
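For comparison, a version whose body actually depends on the argument behaves the way the original poster expected (a sketch; the name `area` is made up, and the 208-by-208 figure is kept from the post):

```python
# 208 ft * 208 ft is the poster's acre-sized square; scale by n acres.
area = lambda n: (208.0 * 208.0) * n

print(area(1))     # 43264.0
print(area(0.25))  # 10816.0
```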


-- 
To email me, substitute nowhere->spamcop, invalid->net.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python as pen and paper substitute

2010-04-07 Thread Manuel Graune
Hello Johan,

thanks to you (and everyone else who answered) for your effort.

Johan Grönqvist  writes:

> Manuel Graune skrev:
>> Manuel Graune  writes:
>>
>> Just as an additional example, let's assume I'd want to add the areas of
>> two circles.
>> [...]
>> which can be explained to anyone who knows
>> basic math and is not at all interested in
>> python.
>>
>
> Third attempt. The markup now includes tagging of different parts of
> the code, and printing parts of the source based on a tag.
>

after playing around for a while, this is what I finally ended up with:

8<8< source ---8<
#! /usr/bin/python
## Show
# List of all imports:
from __future__ import with_statement, print_function
from math import pi as PI
import sys
##

class Source_Printer(object):
def __init__(self):
self.is_printing= False
with open(sys.argv[0]) as file:
self.lines=(line for line in file.readlines())
for line in self.lines:
if line.startswith("print_source"):
break 
elif line == "##\n":
self.is_printing= False
elif line.startswith("## Show"):
print("\n")
self.is_printing= True
elif self.is_printing:
print(line,end="")
def __call__(self):
for line in self.lines:
if line == "##\n" or line.startswith("print_source"):
if self.is_printing:
self.is_printing= False
break
else:
self.is_printing= False
elif line.startswith("## Show"):
print("\n")
self.is_printing= True
elif self.is_printing:
print(line, end="")


print_source= Source_Printer()
## Show
#Calculation of first Area:
d1= 3.0
A1= d1**2 * PI / 4.0
##
print_source()

print ("Area of Circle 1:\t", A1)

## Show
#Calculation of second area:
d2= 5.0
A2= d2**2 * PI / 4.0
##
# This is a comment that won't be printed

print_source()
print ("Area of Circle 2:\t", A2)

# This is another one
Sum_Of_Areas= A1 + A2
print ("Sum of areas:\t", Sum_Of_Areas) 

8<8< result: ---8<

# List of all imports:
from __future__ import with_statement, print_function
from math import pi as PI
import sys


#Calculation of first Area:
d1= 3.0
A1= d1**2 * PI / 4.0
Area of Circle 1:	 7.06858347058


#Calculation of second area:
d2= 5.0
A2= d2**2 * PI / 4.0
Area of Circle 2:	 19.6349540849
Sum of areas:	 26.703537

8<8< result: ---8<

Regards,

Manuel



-- 
A hundred men did the rational thing. The sum of those rational choices was
called panic. Neal Stephenson -- System of the world
http://www.graune.org/GnuPG_pubkey.asc
Key fingerprint = 1E44 9CBD DEE4 9E07 5E0A  5828 5476 7E92 2DB4 3C99
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of list vs. set equality operations

2010-04-07 Thread Raymond Hettinger
[Gustavo Nare]
> In other words: The more different elements two collections have, the
> faster it is to compare them as sets. And as a consequence, the more
> equivalent elements two collections have, the faster it is to compare
> them as lists.
>
> Is this correct?

If two collections are equal, then comparing them as a set is always
slower than comparing them as a list.  Both have to call __eq__ for
every element, but sets have to search for each element while lists
can just iterate over consecutive pointers.

If the two collections have unequal sizes, then both ways immediately
return unequal.

If the two collections are unequal but have the same size, then
the comparison time is data dependent (when the first mismatch
is found).


Raymond
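A quick way to see the equal-collections case described above (absolute timings are machine-dependent; only the direction matters, and the list comparison is typically the smaller number):

```python
import timeit

a = list(range(10000))
b = list(range(10000))
sa, sb = set(a), set(b)

# both comparisons call __eq__ per element, but the set version also
# has to probe the hash table for each element
list_t = timeit.timeit(lambda: a == b, number=200)
set_t = timeit.timeit(lambda: sa == sb, number=200)
print("list:", list_t, "set:", set_t)
```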
-- 
http://mail.python.org/mailman/listinfo/python-list




[Q] raise exception with fake filename and linenumber

2010-04-07 Thread kwatch
Hi all,

Is it possible to raise an exception with a custom traceback that specifies
a file and line?

Situation
=

I'm creating a certain parser.
I want to report syntax error with the same format as other exception.

Example
===

parser.py:
-
1: def parse(filename):
2: if something_is_wrong():
3: linenum = 123
4: raise Exception("syntax error on %s, line %s" % (filename,
linenum))
5:
6: parse('example.file')
-

current result:
-
Traceback (most recent call last):
  File "/tmp/parser.py", line 6, in 
parse('example.file')
  File "/tmp/parser.py", line 4, in parse
raise Exception("syntax error on %s, line %s" % (filename,
linenum))
Exception: syntax error on example.file, line 123
-

my hope is:
-
Traceback (most recent call last):
  File "/tmp/parser.py", line 6, in 
parse('example.file')
  File "/tmp/parser.py", line 4, in parse
raise Exception("syntax error on %s, line %s" % (filename,
linenum))
  File "/tmp/example.file", line 123
foreach item in items   # wrong syntax line
Exception: syntax error
-

I guess I must create dummy traceback data, but I don't know how to do
it.
Could you give me some advice?

Thank you.
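One way to get close without building a fake traceback at all: SyntaxError instances are special-cased by the traceback printer, and their filename/lineno/text attributes are rendered in the familiar 'File "...", line N' form. A sketch using the values from the example:

```python
# Build a SyntaxError that points at the *parsed* file, not parser.py.
def syntax_error(filename, linenum, line_text):
    e = SyntaxError("syntax error")
    e.filename = filename
    e.lineno = linenum
    e.text = line_text   # the offending source line, shown under the header
    return e

# raise syntax_error("/tmp/example.file", 123, "foreach item in items\n")
```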

--
regards,
makoto kuwata
-- 
http://mail.python.org/mailman/listinfo/python-list


fcntl, serial ports and serial signals on RS232.

2010-04-07 Thread Max Kotasek
Hello to all out there,

I'm trying to figure out how to parse the responses from fcntl.ioctl()
calls that modify the serial lines in a way that asserts that the line
is now changed.  For example I may want to drop RTS explicitly, and
assert that the line has been dropped before returning.

Here is a brief snippet of code that I've been using to do that, but
not sure what to do with the returned response:

def set_RTS(self, state=True):
  if self.fd is None:
return 0

  p = struct.pack('I', termios.TIOCM_RTS)
  if state:
return fcntl.ioctl(self.fd, termios.TIOCMBIS, p)
  else:
return fcntl.ioctl(self.fd, termios.TIOCMBIC, p)

The problem is I get responses like '\x01\x00\x00\x00', or
'\x02\x00\x00\x00'  and I'm not sure what they mean.  I tried doing
illogical things like settings CTS using the TIOCM_CTS flag and I end
up just getting back a slightly different binary packed 32 bit integer
(in that case '\x20\x00\x00\x00').  The above example has self.fd
being defined as os.open('/dev/ttyS0', os.O_RDWR | os.O_NONBLOCK).

Is someone familiar with manipulating serial signals like this in
python?  Am I even taking the right approach by using the fcntl.ioctl
call?  The environment is a ubuntu 8.04 distribution.  Unfortunately
due to other limitations, I can't use/extend pyserial, though I would
like to.

I appreciate any advice on this matter,
Max
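(For what it's worth, the usual way to assert that a line actually changed is to read the modem-status bits back with TIOCMGET and test the mask, rather than interpreting the buffer returned by TIOCMBIS/TIOCMBIC. On Linux, TIOCM_CTS is 0x020, which matches the '\x20\x00\x00\x00' seen above. A sketch, assuming an already-open tty fd:)

```python
import fcntl
import struct
import termios

def modem_bits(fd):
    """Return the current modem-status bitmask for an open tty fd."""
    buf = fcntl.ioctl(fd, termios.TIOCMGET, struct.pack("I", 0))
    return struct.unpack("I", buf)[0]

def rts_asserted(fd):
    return bool(modem_bits(fd) & termios.TIOCM_RTS)

# e.g. after fcntl.ioctl(fd, termios.TIOCMBIS,
#                        struct.pack("I", termios.TIOCM_RTS)),
# rts_asserted(fd) should come back True.
```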
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Recommend Commercial graphing library

2010-04-07 Thread David Bolen
AlienBaby  writes:

> I'd be grateful for any suggestions / pointers to something useful,

Ignoring the commercial vs. open source discussion, although it was a
few years ago, I found Chart Director (http://www.advsofteng.com/) to
work very well, with plenty of platform and language support,
including Python.

-- David
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Impersonating a Different Logon

2010-04-07 Thread David Bolen
Kevin Holleran  writes:

> Thanks, I was able to connect to the remote machine.  However, how do
> I query for a very specific key value?  I have to scan hundreds of
> machines and want to reduce what I am querying.  I would like to
> be able to scan a very specific key and report on its value.

Any remote machine connection should automatically use any cached
credentials for that machine, since Windows always uses the same
credentials for a given target machine.

So if you were to access a share with the appropriate credentials,
using _winreg after that point should work.  I normally use
\\machine\ipc$ (even from the command line) which should always exist.

You can use the wrappers in the PyWin32 library (win32net) to access
and then release the share with NetUseAdd and NetUseDel.

Of course, the extra step of accessing the share might or might not be
any faster than WMI, but it would have a small advantage of not
needing WMI support on the target machine - though that may be a
non-issue nowadays.

-- David
-- 
http://mail.python.org/mailman/listinfo/python-list


remote multiprocessing, shared object

2010-04-07 Thread Norm Matloff
Should be a simple question, but I can't seem to make it work from my
understanding of the docs.

I want to use the multiprocessing module with remote clients, accessing
shared lists.  I gather one is supposed to use register(), but I don't
see exactly how.  I'd like to have the clients read and write the shared
list directly, not via some kind of get() and set() functions.  It's
clear how to do this in a shared-memory setting, but how can one do it
across a network, i.e. with serve_forever(), connect() etc.?

Any help, especially with a concrete example, would be much appreciated.
Thanks.

Norm
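(A sketch of the register() pattern; the address, authkey, and typeid are made up. The server owns one list, and every client call to get_list() returns a proxy to that same list whose exposed methods are forwarded over the connection, so clients can append to and index it without explicit get()/set() wrappers:)

```python
from multiprocessing.managers import BaseManager

shared = []  # the list the server process owns

class ListManager(BaseManager):
    pass

# each get_list() call hands back a proxy to the same list object
ListManager.register("get_list", callable=lambda: shared,
                     exposed=("append", "extend", "__getitem__",
                              "__setitem__", "__len__"))

# server side (blocking):
#   ListManager(address=("", 50000), authkey=b"secret"
#               ).get_server().serve_forever()
# client side:
#   m = ListManager(address=("somehost", 50000), authkey=b"secret")
#   m.connect()
#   m.get_list().append("hello")
```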

-- 
http://mail.python.org/mailman/listinfo/python-list


Regex driving me crazy...

2010-04-07 Thread J
Can someone make me un-crazy?

I have a bit of code that right now, looks like this:

status = getoutput('smartctl -l selftest /dev/sda').splitlines()[6]
status = re.sub(' (?= )(?=([^"]*"[^"]*")*[^"]*$)', ":",status)
print status

Basically, it pulls the first actual line of data from the return you
get when you use smartctl to look at a hard disk's selftest log.

The raw data looks like this:

# 1  Short offline   Completed without error   00%   679 -

Unfortunately, all that whitespace is arbitrary single space
characters.  And I am interested in the string that appears in the
third column, which changes as the test runs and then completes.  So
in the example, "Completed without error"

The regex I have up there doesn't quite work, as it seems to be
subbing EVERY space (or at least in instances of more than one space)
to a ':' like this:

# 1: Short offline:: Completed without error:: 00%:: 679 -

Ultimately, what I'm trying to do is replace any run of more than one
space with a delimiter, then split the result into a list and get the
third item.

OR, if there's a smarter, shorter, or better way of doing it, I'd love to know.

The end result should pull the whole string in the middle of that
output line, and then I can use that to compare to a list of possible
output strings to determine if the test is still running, has
completed successfully, or failed.

Unfortunately, my google-fu fails right now, and my Regex powers were
always rather weak anyway...

So any ideas on what the best way to proceed with this would be?
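(One literal reading of "replace any run of more than one space with a delimiter, then split" is a re.split on two-or-more spaces; the sample line is the one from the post:)

```python
import re

line = "# 1  Short offline   Completed without error   00%   679 -"
# split on runs of two or more whitespace characters; single spaces
# inside a field ("Completed without error") survive
fields = re.split(r"\s{2,}", line.strip())
print(fields[2])  # Completed without error
```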
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Grant Edwards
On 2010-04-07, J  wrote:

> Can someone make me un-crazy?

Definitely.  Regex is driving you crazy, so don't use a regex.

  inputString = "# 1  Short offline   Completed without error 00%   
679 -"
  
  print ' '.join(inputString.split()[4:-3])

> So any ideas on what the best way to proceed with this would be?

Anytime you have a problem with a regex, the first thing you should
ask yourself:  "do I really, _really_ need a regex?

Hint: the answer is usually "no".

-- 
Grant Edwards   grant.b.edwardsYow! I'm continually AMAZED
  at   at th'breathtaking effects
  gmail.comof WIND EROSION!!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python as pen and paper substitute

2010-04-07 Thread Michael Torrie
On 04/06/2010 12:40 PM, Manuel Graune wrote:
> Hello everyone,
> 
> I am looking for ways to use a python file as a substitute for simple
> pen and paper calculations. At the moment I mainly use a combination
> of triple-quoted strings, exec and print (Yes, I know it's not exactly
> elegant). 

This isn't quite along the lines that this thread is going, but it seems
to me that a program like "reinteract" is about what I want to replace a
pen and paper with a python-based thing.  Last time I used it, it was
buggy, but if this concept was developed, it would totally rock:

http://fishsoup.net/software/reinteract/

-- 
http://mail.python.org/mailman/listinfo/python-list


order that destructors get called?

2010-04-07 Thread Brendan Miller
I'm used to C++ where destructors get called in reverse order of construction
like this:

{
Foo foo;
Bar bar;

// calls Bar::~Bar()
// calls Foo::~Foo()
}

I'm writing a ctypes wrapper for some native code, and I need to manage some
memory. I'm wrapping the memory in a python class that deletes the underlying
 memory when the python class's reference count hits zero.

When doing this, I noticed some odd behaviour. I had code like this:

def delete_my_resource(res):
# deletes res

class MyClass(object):
def __del__(self):
delete_my_resource(self.res)

o = MyClass()

What happens is that as the program shuts down, delete_my_resource is released
*before* o is released. So when __del__ get called, delete_my_resource is now
None.

Obviously, MyClass needs to hang onto a reference to delete_my_resource.

What I'm wondering is if there's any documented order that reference counts
get decremented when a module is released or when a program terminates.

What I would expect is "reverse order of definition" but obviously that's not
the case.

Brendan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: order that destructors get called?

2010-04-07 Thread Stephen Hansen

On 2010-04-07 15:08:14 -0700, Brendan Miller said:

When doing this, I noticed some odd behaviour. I had code like this:

def delete_my_resource(res):
# deletes res

class MyClass(object):
def __del__(self):
delete_my_resource(self.res)

o = MyClass()

What happens is that as the program shuts down, delete_my_resource is released
*before* o is released. So when __del__ gets called, delete_my_resource is now
None.


The first thing Python does when shutting down is go through and set the 
module-level value of every name to None; this may or may not cause 
the objects previously bound to those names to be destroyed, 
depending on whether it drops their reference count to 0.


So if you need to call something in __del__, be sure to save its 
reference for later, so that when __del__ gets called, you can be sure 
the things you need are still alive. Perhaps on MyClass, in its 
__init__, or some such.



What I'm wondering is if there's any documented order that reference counts
get decremented when a module is released or when a program terminates.

What I would expect is "reverse order of definition" but obviously that's not
the case.


AFAIR, every top level name gets set to None first; this causes many 
things to get recycled. There's no order beyond that, though. 
Namespaces are dictionaries, and dictionaries are unordered. So you 
can't really infer any sort of order to the destruction: if you need 
something to be alive when a certain __del__ is called, you have to 
keep a reference to it.
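A sketch of that advice (the names here are illustrative, not from the
original post): bind the cleanup callable on the class at definition time,
so __del__ never depends on the module namespace still being intact.

```python
import os

class TempPath(object):
    # Bind the cleanup callable at class-definition time; module-level
    # names may already have been set to None by the time __del__ runs
    # during interpreter shutdown.
    _remove = staticmethod(os.remove)

    def __init__(self, path):
        self.path = path
        open(path, 'w').close()

    def __del__(self):
        try:
            self._remove(self.path)
        except OSError:
            pass

t = TempPath('demo.tmp')
del t  # refcount hits zero; _remove is still reachable via the class
```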


--
--S

... p.s: change the ".invalid" to ".com" in email address to reply privately.

--
http://mail.python.org/mailman/listinfo/python-list


Profiling: Interpreting tottime

2010-04-07 Thread Nikolaus Rath
Hello,

Consider the following function:

def check_s3_refcounts():
    """Check s3 object reference counts"""

    global found_errors
    log.info('Checking S3 object reference counts...')

    for (key, refcount) in conn.query("SELECT id, refcount FROM s3_objects"):

        refcount2 = conn.get_val("SELECT COUNT(inode) FROM blocks WHERE s3key=?",
                                 (key,))
        if refcount != refcount2:
            log_error("S3 object %s has invalid refcount, setting from %d to %d",
                      key, refcount, refcount2)
            found_errors = True
            if refcount2 != 0:
                conn.execute("UPDATE s3_objects SET refcount=? WHERE id=?",
                             (refcount2, key))
            else:
                # Orphaned object will be picked up by check_keylist
                conn.execute('DELETE FROM s3_objects WHERE id=?', (key,))

When I ran cProfile.Profile().runcall() on it, I got the following
result:

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
1 7639.962 7639.962 7640.269 7640.269 fsck.py:270(check_s3_refcounts)

So according to the profiler, the entire 7639 seconds were spent
executing the function itself.

How is this possible? I really don't see how the above function can
consume any CPU time without spending it in one of the called
sub-functions.
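[For anyone digging into a case like this, here is a generic sketch
(nothing below is from the fsck.py in question, and it is spelled for
Python 3) of dumping the per-function breakdown with pstats, which shows
whether time really is being attributed to sub-calls:

```python
import cProfile
import io
import pstats

def busy():
    # Pure-Python work: tottime here is genuinely spent in the function body.
    total = 0
    for i in range(200000):
        total += i * i
    return total

prof = cProfile.Profile()
prof.runcall(busy)

# Sort by tottime and print the top entries, mirroring the table above.
out = io.StringIO()
pstats.Stats(prof, stream=out).sort_stats('tottime').print_stats(10)
print(out.getvalue())
```
]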


Puzzled,

   -Nikolaus

-- 
 »Time flies like an arrow, fruit flies like a Banana.«

  PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pass object or use self.object?

2010-04-07 Thread Lie Ryan
On 04/07/10 18:34, Bruno Desthuilliers wrote:
> Lie Ryan a écrit :
> (snip)
> 
>> Since in function in python is a first-class object, you can instead do
>> something like:
>>
>> def process(document):
>> # note: document should encapsulate its own logic
>> document.do_one_thing()
> 
> Obvious case of encapsulation abuse here. Should a file object
> encapsulate all the csv parsing logic ? (and the html parsing, xml
> parsing, image manipulation etc...) ? Should a "model" object
> encapsulate the presentation logic ? I could go on for hours here...

Yes, but no; you're taking it out of context. Is {csv|html|xml|image}
parsing logic a document's logic? Is presentation a document's logic? If
they're not, then they do not belong in document.
-- 
http://mail.python.org/mailman/listinfo/python-list


help req: installing debugging symbols

2010-04-07 Thread sanam singh

Hi,

I am using Ubuntu 9.10. I want to install a version of Python that was
compiled with debug symbols. But if I delete Python from Ubuntu it would
definitely stop working, and Python comes preinstalled on Ubuntu without
debugging symbols. How can I install Python with debugging symbols?

Thanks.

Regards,
Sanam
_
Hotmail: Trusted email with Microsoft’s powerful SPAM protection.
https://signup.live.com/signup.aspx?id=60969
-- 
http://mail.python.org/mailman/listinfo/python-list


ftp and python

2010-04-07 Thread Matjaz Pfefferer

Hi,
I'm a Python newbie and I have some beginner's problems with ftp handling.
What would be the easiest way to copy files from one ftp folder to another
without downloading them to the local system?
Are there any snippets for this task? (I couldn't find an example like this.)

Thx
  
_
Hotmail: Free, trusted and rich email service.
https://signup.live.com/signup.aspx?id=60969
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Striving for PEP-8 compliance

2010-04-07 Thread Lawrence D'Oliveiro
In message , Tom Evans 
wrote:

> I've written a bunch of internal libraries for my company, and they
> all use two space indents, and I'd like to be more consistent and
> conform to PEP-8 as much as I can.

“A foolish consistency is the hobgoblin of little minds”
— Ralph Waldo Emerson
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Striving for PEP-8 compliance

2010-04-07 Thread Lawrence D'Oliveiro
In message , Gabriel 
Genellina wrote:

> If you only reindent the code (without adding/removing lines) then you can
> compare the compiled .pyc files (excluding the first 8 bytes that contain
> a magic number and the source file timestamp). Remember that code objects
> contain line number information.

Anybody who ever creates another indentation-controlled language should be 
beaten to death with a Guido van Rossum voodoo doll.
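[Gabriel's .pyc trick, quoted above, can be sketched as follows; note the
8-byte header (4-byte magic number plus 4-byte source timestamp) is the
Python 2 layout of the era, and newer versions use a longer header:

```python
def same_code(pyc_a, pyc_b, header=8):
    # Compare two compiled files, ignoring the header (magic number
    # plus source timestamp), so a pure-whitespace reindent that leaves
    # the bytecode identical compares equal.
    with open(pyc_a, 'rb') as fa, open(pyc_b, 'rb') as fb:
        return fa.read()[header:] == fb.read()[header:]
```
]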
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 4:40 pm, J  wrote:
> Can someone make me un-crazy?
>
> I have a bit of code that right now, looks like this:
>
> status = getoutput('smartctl -l selftest /dev/sda').splitlines()[6]
>         status = re.sub(' (?= )(?=([^"]*"[^"]*")*[^"]*$)', ":",status)
>         print status
>
> Basically, it pulls the first actual line of data from the return you
> get when you use smartctl to look at a hard disk's selftest log.
>
> The raw data looks like this:
>
> # 1  Short offline       Completed without error       00%       679         -
>
> Unfortunately, all that whitespace is arbitrary single space
> characters.  And I am interested in the string that appears in the
> third column, which changes as the test runs and then completes.  So
> in the example, "Completed without error"
>
> The regex I have up there doesn't quite work, as it seems to be
> subbing EVERY space (or at least in instances of more than one space)
> to a ':' like this:
>
> # 1: Short offline:: Completed without error:: 00%:: 679 -
>
> Ultimately, what I'm trying to do is either replace any run of more than
> one space with a delimiter, then split the result into a list and
> get the third item.
>
> OR, if there's a smarter, shorter, or better way of doing it, I'd love to 
> know.
>
> The end result should pull the whole string in the middle of that
> output line, and then I can use that to compare to a list of possible
> output strings to determine if the test is still running, has
> completed successfully, or failed.
>
> Unfortunately, my google-fu fails right now, and my Regex powers were
> always rather weak anyway...
>
> So any ideas on what the best way to proceed with this would be?

You mean like this?

>>> import re
>>> re.split(' {2,}', '# 1  Short offline       Completed without error       00%')
['# 1', 'Short offline', 'Completed without error', '00%']
>>>

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 4:47 pm, Grant Edwards  wrote:
> On 2010-04-07, J  wrote:
>
> > Can someone make me un-crazy?
>
> Definitely.  Regex is driving you crazy, so don't use a regex.
>
>   inputString = "# 1  Short offline       Completed without error     00%     
>   679         -"
>
>   print ' '.join(inputString.split()[4:-3])
>
> > So any ideas on what the best way to proceed with this would be?
>
> Anytime you have a problem with a regex, the first thing you should
> ask yourself:  "do I really, _really_ need a regex?
>
> Hint: the answer is usually "no".
>
> --
> Grant Edwards               grant.b.edwards        Yow! I'm continually AMAZED
>                                   at               at th'breathtaking effects
>                               gmail.com            of WIND EROSION!!

OK, fine.  Post a better solution to this problem than:

>>> import re
>>> re.split(' {2,}', '# 1  Short offline       Completed without error       00%')
['# 1', 'Short offline', 'Completed without error', '00%']
>>>

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: help req: installing debugging symbols

2010-04-07 Thread Shashwat Anand
Install Python into a different directory (use a different $prefix for that
build) and change your PATH value accordingly.


2010/4/5 sanam singh 

>  Hi,
>
> I am using ununtu 9.10. I want to  install  a version of Python that was
> compiled with debug symbols.
>
> But if I delete python from ubuntu it would definitely stop working . And
> python comes preintalled in ubuntu without debuggi ng symbols.
>
> How can i install python with debugging symbols ?
>
> Thanks.
>
> Regards,
>
> Sanam
>
>
>
> --
> Hotmail: Trusted email with Microsoft’s powerful SPAM protection. Sign up
> now. 
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 7:49 pm, Patrick Maupin  wrote:
> On Apr 7, 4:40 pm, J  wrote:
>
>
>
> > Can someone make me un-crazy?
>
> > I have a bit of code that right now, looks like this:
>
> > status = getoutput('smartctl -l selftest /dev/sda').splitlines()[6]
> >         status = re.sub(' (?= )(?=([^"]*"[^"]*")*[^"]*$)', ":",status)
> >         print status
>
> > Basically, it pulls the first actual line of data from the return you
> > get when you use smartctl to look at a hard disk's selftest log.
>
> > The raw data looks like this:
>
> > # 1  Short offline       Completed without error       00%       679        
> >  -
>
> > Unfortunately, all that whitespace is arbitrary single space
> > characters.  And I am interested in the string that appears in the
> > third column, which changes as the test runs and then completes.  So
> > in the example, "Completed without error"
>
> > The regex I have up there doesn't quite work, as it seems to be
> > subbing EVERY space (or at least in instances of more than one space)
> > to a ':' like this:
>
> > # 1: Short offline:: Completed without error:: 00%:: 
> > 679 -
>
> > Ultimately, what I'm trying to do is either replace any space that is> one 
> > space wiht a delimiter, then split the result into a list and
>
> > get the third item.
>
> > OR, if there's a smarter, shorter, or better way of doing it, I'd love to 
> > know.
>
> > The end result should pull the whole string in the middle of that
> > output line, and then I can use that to compare to a list of possible
> > output strings to determine if the test is still running, has
> > completed successfully, or failed.
>
> > Unfortunately, my google-fu fails right now, and my Regex powers were
> > always rather weak anyway...
>
> > So any ideas on what the best way to proceed with this would be?
>
> You mean like this?
>
> >>> import re
> >>> re.split(' {2,}', '# 1  Short offline       Completed without error       00%')
>
> ['# 1', 'Short offline', 'Completed without error', '00%']
>
>
>
> Regards,
> Pat

BTW, although I find it annoying when people say "don't do that" when
"that" is a perfectly good thing to do, and although I also find it
annoying when people tell you what not to do without telling you what
*to* do, and although I find the regex solution to this problem to be
quite clean, the equivalent non-regex solution is not terrible, so I
will present it as well, for your viewing pleasure:

>>> [x for x in '# 1  Short offline       Completed without error       00%'.split('  ') if x.strip()]
['# 1', 'Short offline', ' Completed without error', ' 00%']

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Striving for PEP-8 compliance

2010-04-07 Thread Chris Rebert
On Wed, Apr 7, 2010 at 5:35 PM, Lawrence D'Oliveiro <@> wrote:
> In message , Gabriel
> Genellina wrote:
>
>> If you only reindent the code (without adding/removing lines) then you can
>> compare the compiled .pyc files (excluding the first 8 bytes that contain
>> a magic number and the source file timestamp). Remember that code objects
>> contain line number information.
>
> Anybody who ever creates another indentation-controlled language should be
> beaten to death with a Guido van Rossum voodoo doll.

I'll go warn Don Syme. :P  I wonder how Microsoft will react.
http://blogs.msdn.com/dsyme/archive/2006/08/24/715626.aspx

Cheers,
Chris
--
http://blog.rebertia.com/2010/01/24/of-braces-and-semicolons/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ftp and python

2010-04-07 Thread Tim Chase

Matjaz Pfefferer wrote:

What would be the easiest way to copy files from one ftp
folder to another without downloading them to local system?


As best I can tell, this isn't well-supported by FTP[1] which 
doesn't seem to have a native "copy this file from 
server-location to server-location bypassing the client". 
There's a pair of RNFR/RNTO commands that allow you to rename (or 
perhaps move as well) a file which ftplib.FTP.rename() supports 
but it sounds like you want two copies.


When I've wanted to do this, I've used a non-FTP method, usually 
SSH'ing into the server and just using "cp".  This could work for 
you if you have pycrypto/paramiko installed.


Your last hope would be that your particular FTP server has some 
COPY extension that falls outside of RFC parameters -- something 
that's not a portable solution, but if you're doing a one-off 
script or something in a controlled environment, could work.


Otherwise, you'll likely be stuck slurping the file down just to 
send it back up.
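If you do end up slurping, a small ftplib sketch (untested against any 
particular server; the helper name and paths are made up) that stages 
the file in memory rather than on disk:

```python
import io
from ftplib import FTP

def ftp_copy(ftp, src_path, dst_path):
    # Download into an in-memory buffer, then upload it under the new
    # path: the data still round-trips through the client, but never
    # touches the local filesystem.
    buf = io.BytesIO()
    ftp.retrbinary('RETR ' + src_path, buf.write)
    buf.seek(0)
    ftp.storbinary('STOR ' + dst_path, buf)
```

where ftp is an ftplib.FTP instance that is already connected and 
logged in.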


-tkc


[1]
http://en.wikipedia.org/wiki/List_of_FTP_commands




--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Regular Expressions

2010-04-07 Thread Patrick Maupin
On Apr 7, 3:52 am, Chris Rebert  wrote:

> Regular expressions != Parsers

True, but lots of parsers *use* regular expressions in their
tokenizers.  In fact, if you have a pure Python parser, you can often
get huge performance gains by rearranging your code slightly so that
you can use regular expressions in your tokenizer, because that
effectively gives you access to a fast, specialized C library that is
built into practically every Python interpreter on the planet.
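As an illustration of that rearrangement (a generic sketch, not taken
from any particular parser), a tokenizer can hand the whole scan to one
compiled pattern with named alternatives:

```python
import re

# One compiled pattern with named alternatives: re.finditer does the
# scanning in C, and m.lastgroup tells us which alternative matched.
TOKEN = re.compile(r'(?P<num>\d+)|(?P<name>[A-Za-z_]\w*)|(?P<op>[-+*/()])|(?P<ws>\s+)')

def tokenize(text):
    for m in TOKEN.finditer(text):
        if m.lastgroup != 'ws':  # drop whitespace tokens
            yield m.lastgroup, m.group()

print(list(tokenize('spam + 42')))
# -> [('name', 'spam'), ('op', '+'), ('num', '42')]
```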

> Every time someone tries to parse nested structures using regular
> expressions, Jamie Zawinski kills a puppy.

And yet, if you are parsing stuff in Python, and your parser doesn't
use some specialized C code for tokenization (which will probably be
regular expressions unless you are using mxtexttools or some other
specialized C tokenizer code), your nested structure parser will be
dog slow.

Now, for some applications, the speed just doesn't matter, and for
people who don't yet know the difference between regexps and parsing,
pointing them at PyParsing is certainly doing them a valuable service.

But that's twice today when I've seen people warned off regular
expressions without a cogent explanation that, while the re module is
good at what it does, it really only handles the very lowest level of
a parsing problem.

My 2 cents is that something like PyParsing is absolutely great for
people who want a simple parser without a lot of work.  But if people
use PyParsing, and then find out that (for their particular
application) it isn't fast enough, and then wonder what to do about
it, if all they remember is that somebody told them not to use regular
expressions, they will just come to the false conclusion that pure
Python is too painfully slow for any real world task.

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of list vs. set equality operations

2010-04-07 Thread Steven D'Aprano
On Wed, 07 Apr 2010 10:55:10 -0700, Raymond Hettinger wrote:

> [Gustavo Nare]
>> In other words: The more different elements two collections have, the
>> faster it is to compare them as sets. And as a consequence, the more
>> equivalent elements two collections have, the faster it is to compare
>> them as lists.
>>
>> Is this correct?
> 
> If two collections are equal, then comparing them as a set is always
> slower than comparing them as a list.  Both have to call __eq__ for
> every element, but sets have to search for each element while lists can
> just iterate over consecutive pointers.
> 
> If the two collections have unequal sizes, then both ways immediately
> return unequal.


Perhaps I'm misinterpreting what you are saying, but I can't confirm that 
behaviour, at least not for subclasses of list:

>>> class MyList(list):
... def __len__(self):
... return self.n
...
>>> L1 = MyList(range(10))
>>> L2 = MyList(range(10))
>>> L1.n = 9
>>> L2.n = 10
>>> L1 == L2
True
>>> len(L1) == len(L2)
False




-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of list vs. set equality operations

2010-04-07 Thread Patrick Maupin
On Apr 7, 8:41 pm, Steven D'Aprano
 wrote:
> On Wed, 07 Apr 2010 10:55:10 -0700, Raymond Hettinger wrote:
> > [Gustavo Nare]
> >> In other words: The more different elements two collections have, the
> >> faster it is to compare them as sets. And as a consequence, the more
> >> equivalent elements two collections have, the faster it is to compare
> >> them as lists.
>
> >> Is this correct?
>
> > If two collections are equal, then comparing them as a set is always
> > slower than comparing them as a list.  Both have to call __eq__ for
> > every element, but sets have to search for each element while lists can
> > just iterate over consecutive pointers.
>
> > If the two collections have unequal sizes, then both ways immediately
> > return unequal.
>
> Perhaps I'm misinterpreting what you are saying, but I can't confirm that
> behaviour, at least not for subclasses of list:
>
> >>> class MyList(list):
>
> ...     def __len__(self):
> ...             return self.n
> ...>>> L1 = MyList(range(10))
> >>> L2 = MyList(range(10))
> >>> L1.n = 9
> >>> L2.n = 10
> >>> L1 == L2
> True
> >>> len(L1) == len(L2)
>
> False
>
> --
> Steven

I think what he is saying is that the list __eq__ method will look at
the list lengths first.  This may or may not be considered a subtle
bug for the edge case you are showing.

If I do the following:

>>> L1 = range(1000)
>>> L2 = range(1000)
>>> L3 = range(1001)
>>> L1 == L2
True
>>> L1 == L3
False
>>>

I don't even need to run timeit -- the "True" takes awhile to print
out, while the "False" prints out immediately.
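The same comparison can be timed directly (a sketch; the absolute
numbers will of course vary by machine):

```python
import timeit

# Equal collections: list comparison walks consecutive pointers, while
# set comparison has to look each element up by hash.
setup = "a = list(range(1000)); b = list(range(1000)); sa = set(a); sb = set(b)"
t_list = min(timeit.repeat("a == b", setup=setup, number=2000, repeat=3))
t_set = min(timeit.repeat("sa == sb", setup=setup, number=2000, repeat=3))
print("list ==: %.4fs   set ==: %.4fs" % (t_list, t_set))
```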

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread James Stroud

Patrick Maupin wrote:

BTW, although I find it annoying when people say "don't do that" when
"that" is a perfectly good thing to do, and although I also find it
annoying when people tell you what not to do without telling you what
*to* do, and although I find the regex solution to this problem to be
quite clean, the equivalent non-regex solution is not terrible


I propose a new way to answer questions on c.l.python that will (1) give 
respondents the pleasure of vague admonishment and (2) actually answer the 
question. The way I propose utilizes the double negative. For example:

"You are doing it wrong! Don't not do re.split('\s{2,}', s[2])."

Please answer this way in the future.

Thank you,
James


--
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 9:02 pm, James Stroud 
wrote:
> Patrick Maupin wrote:
> > BTW, although I find it annoying when people say "don't do that" when
> > "that" is a perfectly good thing to do, and although I also find it
> > annoying when people tell you what not to do without telling you what
> > *to* do, and although I find the regex solution to this problem to be
> > quite clean, the equivalent non-regex solution is not terrible
>
> I propose a new way to answer questions on c.l.python that will (1) give 
> respondents the pleasure of vague admonishment and (2) actually answer the 
> question. The way I propose utilizes the double negative. For example:
>
> "You are doing it wrong! Don't not do re.split('\s{2,}', s[2])."
>
> Please answer this way in the future.

I most certainly will not consider when that isn't warranted!

OTOH, in general I am more interested in admonishing the authors of
the pseudo-answers than I am the authors of the questions, despite the
fact that I find this hilarious:

http://despair.com/cluelessness.html

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Grant Edwards
On 2010-04-08, Patrick Maupin  wrote:
> On Apr 7, 4:47?pm, Grant Edwards  wrote:
>> On 2010-04-07, J  wrote:
>>
>> > Can someone make me un-crazy?
>>
>> Definitely. ?Regex is driving you crazy, so don't use a regex.
>>
>> ? inputString = "# 1 ?Short offline ? ? ? Completed without error ? ? 00% ? 
>> ? ? 679 ? ? ? ? -"
>>
>> ? print ' '.join(inputString.split()[4:-3])
[...]

> OK, fine.  Post a better solution to this problem than:
>
> >>> import re
> >>> re.split(' {2,}', '# 1  Short offline       Completed without error       00%')
> ['# 1', 'Short offline', 'Completed without error', '00%']

OK, I'll bite: what's wrong with the solution I already posted?

-- 
Grant

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Grant Edwards
On 2010-04-08, James Stroud  wrote:
> Patrick Maupin wrote:
>> BTW, although I find it annoying when people say "don't do that" when
>> "that" is a perfectly good thing to do, and although I also find it
>> annoying when people tell you what not to do without telling you what
>> *to* do, and although I find the regex solution to this problem to be
>> quite clean, the equivalent non-regex solution is not terrible
>
> I propose a new way to answer questions on c.l.python that will (1) give 
> respondents the pleasure of vague admonishment and (2) actually answer the 
> question. The way I propose utilizes the double negative. For example:
>
> "You are doing it wrong! Don't not do re.split('\s{2,}', s[2])."
>
> Please answer this way in the future.

I will certain try to avoid not answering in a manner not unlike that.

-- 
Grant
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 9:36 pm, Grant Edwards  wrote:
> On 2010-04-08, Patrick Maupin  wrote:> On Apr 7, 4:47?pm, 
> Grant Edwards  wrote:
> >> On 2010-04-07, J  wrote:
>
> >> > Can someone make me un-crazy?
>
> >> Definitely. ?Regex is driving you crazy, so don't use a regex.
>
> >> ? inputString = "# 1 ?Short offline ? ? ? Completed without error ? ? 00% 
> >> ? ? ? 679 ? ? ? ? -"
>
> >> ? print ' '.join(inputString.split()[4:-3])
>
> [...]
>
> > OK, fine.  Post a better solution to this problem than:
>
>  import re
>  re.split(' {2,}', '# 1  Short offline       Completed without error      
>   00%')
> > ['# 1', 'Short offline', 'Completed without error', '00%']
>
> OK, I'll bite: what's wrong with the solution I already posted?
>
> --
> Grant

Sorry, my eyes completely missed your one-liner, so my criticism about
not posting a solution was unwarranted.  I don't think you and I read
the problem the same way (which is probably why I didn't notice your
solution -- because it wasn't solving the problem I thought I saw).

When I saw "And I am interested in the string that appears in the
third column, which changes as the test runs and then completes" I
assumed that, not only could that string change, but so could the one
before it.

I guess my base assumption was that anything with words in it could
change.  I was looking at the OP's attempt at a solution, and he
obviously felt he needed to see two or more spaces as an item
delimiter.

(And I got testy because of seeing other IMO unwarranted denigration
of re on the list lately.)

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Steven D'Aprano
On Wed, 07 Apr 2010 18:03:47 -0700, Patrick Maupin wrote:

> BTW, although I find it annoying when people say "don't do that" when
> "that" is a perfectly good thing to do, and although I also find it
> annoying when people tell you what not to do without telling you what
> *to* do, 

Grant did give a perfectly good solution.


> and although I find the regex solution to this problem to be
> quite clean, the equivalent non-regex solution is not terrible, so I
> will present it as well, for your viewing pleasure:
> 
> >>> [x for x in '# 1  Short offline   Completed without error
>   00%'.split('  ') if x.strip()]
> ['# 1', 'Short offline', ' Completed without error', ' 00%']


This is one of the reasons we're so often suspicious of re solutions:


>>> s = '# 1  Short offline       Completed without error       00%'
>>> tre = Timer("re.split(' {2,}', s)", 
... "import re; from __main__ import s")
>>> tsplit = Timer("[x for x in s.split('  ') if x.strip()]", 
... "from __main__ import s")
>>>
>>> re.split(' {2,}', s) == [x for x in s.split('  ') if x.strip()]
True
>>> 
>>> 
>>> min(tre.repeat(repeat=5))
6.1224789619445801
>>> min(tsplit.repeat(repeat=5))
1.8338048458099365


Even when they are correct and not unreadable line-noise, regexes tend to 
be slow. And they get worse as the size of the input increases:

>>> s *= 1000
>>> min(tre.repeat(repeat=5, number=1000))
2.3496899604797363
>>> min(tsplit.repeat(repeat=5, number=1000))
0.41538596153259277
>>>
>>> s *= 10
>>> min(tre.repeat(repeat=5, number=1000))
23.739185094833374
>>> min(tsplit.repeat(repeat=5, number=1000))
4.6444299221038818


And this isn't even one of the pathological O(N**2) or O(2**N) regexes.

Don't get me wrong -- regexes are a useful tool. But if your first 
instinct is to write a regex, you're doing it wrong.


[quote]
A related problem is Perl's over-reliance on regular expressions 
that is exaggerated by advocating regex-based solution in almost 
all O'Reilly books. The latter until recently were the most
authoritative source of published information about Perl. 

While simple regular expression is a beautiful thing and can 
simplify operations with string considerably, overcomplexity in
regular expressions is extremly dangerous: it cannot serve a basis
for serious, professional programming, it is fraught with pitfalls,
a big semantic mess as a result of outgrowing its primary purpose. 
Diagnostic for errors in regular expressions is even weaker then 
for the language itself and here many things are just go unnoticed.
[end quote]

http://www.softpanorama.org/Scripting/Perlbook/Ch01/
place_of_perl_among_other_lang.shtml



Even Larry Wall has criticised Perl's regex culture:

http://dev.perl.org/perl6/doc/design/apo/A05.html




-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread J
On Wed, Apr 7, 2010 at 22:45, Patrick Maupin  wrote:

> When I saw "And I am interested in the string that appears in the
> third column, which changes as the test runs and then completes" I
> assumed that, not only could that string change, but so could the one
> before it.
>
> I guess my base assumption that anything with words in it could
> change.  I was looking at the OP's attempt at a solution, and he
> obviously felt he needed to see two or more spaces as an item
> delimiter.

I apologize for the confusion, Pat...

I could have worded that better, but at that point I was A:
Frustrated, B: starving, and C: had my wife nagging me to stop working
to come get something to eat ;-)

What I meant was, in that output string, the phrase in the middle
could change in length...
After looking at the source code for smartctl (part of the
smartmontools package for you linux people) I found the switch that
creates those status messages; they vary in character length, some
with non-text characters like ( and ) and /, and have either 3 or 4
words...

The spaces between each column, instead of being a fixed number of
spaces each, were seemingly arbitrarily created... there may be 4
spaces between two columns or there may be 9, or 7 or who knows what,
and since they were all being treated as individual spaces instead of
tabs or something, I was having trouble splitting the output into
something that was easy to parse (at least in my mind it seemed that
way).
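Putting the thread's suggestions together, a sketch that pulls the
status column despite the arbitrary spacing (the sample line is the one
from my earlier post):

```python
import re

line = '# 1  Short offline       Completed without error       00%       679         -'
# Split on runs of two or more spaces; single spaces stay inside fields,
# so multi-word statuses survive intact.
fields = [f for f in re.split(r' {2,}', line) if f.strip()]
status = fields[2]
print(status)  # -> Completed without error
```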

Anyway, that's that... and I do apologize if my original post was
confusing at all...

Cheers
Jeff
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 9:51 pm, Steven D'Aprano
 wrote:
> On Wed, 07 Apr 2010 18:03:47 -0700, Patrick Maupin wrote:
> > BTW, although I find it annoying when people say "don't do that" when
> > "that" is a perfectly good thing to do, and although I also find it
> > annoying when people tell you what not to do without telling you what
> > *to* do,
>
> Grant did give a perfectly good solution.

Yeah, I noticed later and apologized for that.  What he gave will work
perfectly if the only data that changes the number of words is the
data the OP is looking for.  This may or may not be true.  I don't
know anything about the program generating the data, but I did notice
that the OP's attempt at an answer indicated that the OP felt (rightly
or wrongly) he needed to split on two or more spaces.

>
> > and although I find the regex solution to this problem to be
> > quite clean, the equivalent non-regex solution is not terrible, so I
> > will present it as well, for your viewing pleasure:
>
> > >>> [x for x in '# 1  Short offline       Completed without error
> >       00%'.split('  ') if x.strip()]
> > ['# 1', 'Short offline', ' Completed without error', ' 00%']
>
> This is one of the reasons we're so often suspicious of re solutions:
>
> >>> s = '# 1  Short offline       Completed without error       00%'
> >>> tre = Timer("re.split(' {2,}', s)",
>
> ... "import re; from __main__ import s")>>> tsplit = Timer("[x for x in 
> s.split('  ') if x.strip()]",
>
> ... "from __main__ import s")
>
> >>> re.split(' {2,}', s) == [x for x in s.split('  ') if x.strip()]
> True
>
> >>> min(tre.repeat(repeat=5))
> 6.1224789619445801
> >>> min(tsplit.repeat(repeat=5))
>
> 1.8338048458099365
>
> Even when they are correct and not unreadable line-noise, regexes tend to
> be slow. And they get worse as the size of the input increases:
>
> >>> s *= 1000
> >>> min(tre.repeat(repeat=5, number=1000))
> 2.3496899604797363
> >>> min(tsplit.repeat(repeat=5, number=1000))
> 0.41538596153259277
>
> >>> s *= 10
> >>> min(tre.repeat(repeat=5, number=1000))
> 23.739185094833374
> >>> min(tsplit.repeat(repeat=5, number=1000))
>
> 4.6444299221038818
>
> And this isn't even one of the pathological O(N**2) or O(2**N) regexes.
>
> Don't get me wrong -- regexes are a useful tool. But if your first
> instinct is to write a regex, you're doing it wrong.
>
>     [quote]
>     A related problem is Perl's over-reliance on regular expressions
>     that is exaggerated by advocating regex-based solution in almost
>     all O'Reilly books. The latter until recently were the most
>     authoritative source of published information about Perl.
>
>     While simple regular expression is a beautiful thing and can
>     simplify operations with string considerably, overcomplexity in
>     regular expressions is extremely dangerous: it cannot serve as a basis
>     for serious, professional programming, it is fraught with pitfalls,
>     a big semantic mess as a result of outgrowing its primary purpose.
>     Diagnostics for errors in regular expressions are even weaker than
>     for the language itself, and here many things just go unnoticed.
>     [end quote]
>
> http://www.softpanorama.org/Scripting/Perlbook/Ch01/place_of_perl_among_other_lang.shtml
>
> Even Larry Wall has criticised Perl's regex culture:
>
> http://dev.perl.org/perl6/doc/design/apo/A05.html

Bravo!!! Good data, quotes, references, all good stuff!

I absolutely agree that regex shouldn't always be the first thing you
reach for, but I was reading way too much unsubstantiated "this is
bad.  Don't do it." on the subject recently.  In particular, when
people say "Don't use regex.  Use PyParsing!"  It may be good advice
in the right context, but it's a bit disingenuous not to mention that
PyParsing will use regex under the covers...

Regards,
Pat

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tkinter inheritance mess?

2010-04-07 Thread ejetzer
On 5 avr, 22:32, Lie Ryan  wrote:
> On 04/06/10 02:38, ejetzer wrote:
>
>
>
> > On 5 avr, 12:36, ejetzer  wrote:
> >> For a school project, I'm trying to make a minimalist web browser, and
> >> I chose to use Tk as the rendering toolkit. I made my parser classes
> >> into Tkinter canvases, so that I would only have to call pack and
> >> mainloop functions in order to display the rendering. Right now, two
> >> bugs are affecting the program :
> >> 1) When running the full app¹, which fetches a document and then
> >> attempts to display it, I get a TclError :
> >>                  _tkinter.TclError: bad window path name "{Extensible
> >> Markup Language (XML) 1.0 (Fifth Edition)}"
> >> 2) When running only the parsing and rendering test², I get a big
> >> window to open, with nothing displayed. I am not quite familiar with
> >> Tk, so I have no idea of why it acts that way.
>
> >> 1: webbrowser.py
> >> 2: xmlparser.py
>
> > I just realized I haven't included the Google Code project url :
> >http://code.google.com/p/smally-browser/source/browse/#svn/trunk
>
> Check your indentation xmlparser.py in line 63 to 236, are they supposed
> to be correct?

Yes, these are functions that are used exclusively inside the feed
function, so I decided to restrict their namespace. I just realized it
could be confusing, so I placed them in the global namespace.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Grant Edwards
On 2010-04-08, Patrick Maupin  wrote:

> Sorry, my eyes completely missed your one-liner, so my criticism about
> not posting a solution was unwarranted.  I don't think you and I read
> the problem the same way (which is probably why I didn't notice your
> solution -- because it wasn't solving the problem I thought I saw).

No worries.

> When I saw "And I am interested in the string that appears in the
> third column, which changes as the test runs and then completes" I
> assumed that, not only could that string change, but so could the one
> before it.

If that's the case, my solution won't work right.

> I guess my base assumption was that anything with words in it could
> change.  I was looking at the OP's attempt at a solution, and he
> obviously felt he needed to see two or more spaces as an item
> delimiter.

If the requirement is indeed two or more spaces as a delimiter with
spaces allowed in any field, then a regular expression split is
probably the best solution.

-- 
Grant



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of list vs. set equality operations

2010-04-07 Thread Raymond Hettinger
[Raymond Hettinger]
> > If the two collections have unequal sizes, then both ways immediately
> > return unequal.

[Steven D'Aprano]
> Perhaps I'm misinterpreting what you are saying, but I can't confirm that
> behaviour, at least not for subclasses of list:

For doubters, see list_richcompare() in
http://svn.python.org/view/python/trunk/Objects/listobject.c?revision=78522&view=markup

if (Py_SIZE(vl) != Py_SIZE(wl) && (op == Py_EQ || op == Py_NE)) {
    /* Shortcut: if the lengths differ, the lists differ */
    PyObject *res;
    if (op == Py_EQ)
        res = Py_False;
    else
        res = Py_True;
    Py_INCREF(res);
    return res;
}

And see set_richcompare() in
http://svn.python.org/view/python/trunk/Objects/setobject.c?revision=78886&view=markup

case Py_EQ:
    if (PySet_GET_SIZE(v) != PySet_GET_SIZE(w))
        Py_RETURN_FALSE;
    if (v->hash != -1 &&
        ((PySetObject *)w)->hash != -1 &&
        v->hash != ((PySetObject *)w)->hash)
        Py_RETURN_FALSE;
    return set_issubset(v, w);
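For anyone who prefers to see the shortcut from pure Python, here is a quick timing sketch (absolute numbers are machine-dependent; only the relative difference matters):

```python
from timeit import timeit

big = list(range(10**6))
same_len = list(big)
same_len[-1] = -1        # equal length, differs only at the last element
short = big[:-1]         # shorter by one element

# Equal-length compare must scan elements; unequal-length returns at once.
t_scan = timeit(lambda: big == same_len, number=20)
t_shortcut = timeit(lambda: big == short, number=20)
```

Both comparisons return False, but the unequal-size one finishes in constant time.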


Raymond
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 9:51 pm, Steven D'Aprano
 wrote:

> This is one of the reasons we're so often suspicious of re solutions:
>
> >>> s = '# 1  Short offline       Completed without error       00%'
> >>> tre = Timer("re.split(' {2,}', s)",
> ...             "import re; from __main__ import s")
> >>> tsplit = Timer("[x for x in s.split('  ') if x.strip()]",
> ...             "from __main__ import s")
>
> >>> re.split(' {2,}', s) == [x for x in s.split('  ') if x.strip()]
> True
>
> >>> min(tre.repeat(repeat=5))
> 6.1224789619445801
> >>> min(tsplit.repeat(repeat=5))
>
> 1.8338048458099365

I will confess that, in my zeal to defend re, I gave a simple one-
liner, rather than the more optimized version:

>>> from timeit import Timer
>>> s = '# 1  Short offline   Completed without error   00%'
>>> tre = Timer("splitter(s)",
...     "import re; from __main__ import s; splitter = re.compile(' {2,}').split")
>>> tsplit = Timer("[x for x in s.split('  ') if x.strip()]",
... "from __main__ import s")
>>> min(tre.repeat(repeat=5))
1.893190860748291
>>> min(tsplit.repeat(repeat=5))
2.0661051273345947

You're right that if you have an 800K byte string, re doesn't perform
as well as split, but the delta is only a few percent.

>>> s *= 10000
>>> min(tre.repeat(repeat=5, number=1000))
15.331652164459229
>>> min(tsplit.repeat(repeat=5, number=1000))
14.596404075622559

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Q] raise exception with fake filename and linenumber

2010-04-07 Thread Gabriel Genellina

On Wed, 07 Apr 2010 17:23:22 -0300, kwatch wrote:


Is it possible to raise exception with custom traceback to specify
file and line?
I'm creating a certain parser.
I want to report syntax error with the same format as other exception.
-
1: def parse(filename):
2: if something_is_wrong():
3: linenum = 123
4: raise Exception("syntax error on %s, line %s" % (filename,
linenum))
5:
6: parse('example.file')
-

my hope is:
-
Traceback (most recent call last):
  File "/tmp/parser.py", line 6, in <module>
parse('example.file')
  File "/tmp/parser.py", line 4, in parse
raise Exception("syntax error on %s, line %s" % (filename,
linenum))
  File "/tmp/example.file", line 123
foreach item in items   # wrong syntax line
Exception: syntax error
-


The built-in SyntaxError exception does what you want. Constructor  
parameters are undocumented, but they're as follows:


    raise SyntaxError("A descriptive error message", (filename, linenum, colnum, source_line))


colnum is used to place the ^ symbol (10 in this fake example). Output:

Traceback (most recent call last):
  File "1.py", line 9, in <module>
foo()
  File "1.py", line 7, in foo
    raise SyntaxError("A descriptive error message", (filename, linenum, colnum, "this is line 123 in example.file"))

  File "example.file", line 123
this is line 123 in example.file
 ^
SyntaxError: A descriptive error message
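Wiring this back into the OP's parse() sketch (the line/column values are the fake ones from the example above), the exception object carries the fake location in its filename/lineno/offset/text attributes:

```python
def parse(filename):
    # hypothetical parser that flags line 123, column 10 of the input file
    linenum, colnum = 123, 10
    source_line = "foreach item in items"
    raise SyntaxError("A descriptive error message",
                      (filename, linenum, colnum, source_line))

try:
    parse('example.file')
except SyntaxError as exc:
    caught = exc
```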

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Profiling: Interpreting tottime

2010-04-07 Thread Gabriel Genellina
On Wed, 07 Apr 2010 18:44:39 -0300, Nikolaus Rath wrote:



def check_s3_refcounts():
    """Check s3 object reference counts"""

    global found_errors
    log.info('Checking S3 object reference counts...')

    for (key, refcount) in conn.query("SELECT id, refcount FROM s3_objects"):
        refcount2 = conn.get_val("SELECT COUNT(inode) FROM blocks WHERE s3key=?",
                                 (key,))
        if refcount != refcount2:
            log_error("S3 object %s has invalid refcount, setting from %d to %d",
                      key, refcount, refcount2)
            found_errors = True
            if refcount2 != 0:
                conn.execute("UPDATE s3_objects SET refcount=? WHERE id=?",
                             (refcount2, key))
            else:
                # Orphaned object will be picked up by check_keylist
                conn.execute('DELETE FROM s3_objects WHERE id=?', (key,))

When I ran cProfile.Profile().runcall() on it, I got the following
result:

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1 7639.962 7639.962 7640.269 7640.269 fsck.py:270(check_s3_refcounts)


So according to the profiler, the entire 7639 seconds were spent
executing the function itself.

How is this possible? I really don't see how the above function can
consume any CPU time without spending it in one of the called
sub-functions.


Is the conn object implemented as a C extension? The profiler does not  
detect calls to C functions, I think.
You may be interested in this package by Robert Kern:  
http://pypi.python.org/pypi/line_profiler

"Line-by-line profiler.
line_profiler will profile the time individual lines of code take to  
execute."


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


The Regex Story

2010-04-07 Thread Lie Ryan
On 04/08/10 12:45, Patrick Maupin wrote:
> (And I got testy because of seeing other IMO unwarranted denigration
> of re on the list lately.)


Why am I seeing a lot of this pattern lately:

OP: Got problem with string
+- A: Suggested a regex-based solution
   +- B: Quoted "Some people ... regex ... two problems."

or

OP: Writes some regex, found problem
+- A: Quoted "Some people ... regex ... two problems."
   +- B: Supplied regex-based solution, clean one
  +- A: Suggested PyParsing (or similar)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 9:51 pm, Steven D'Aprano
 wrote:

BTW, I don't know how you got 'True' here.

> >>> re.split(' {2,}', s) == [x for x in s.split('  ') if x.strip()]
> True

You must not have s set up to be the string given by the OP.  I just
realized there was an error in my non-regexp example, that actually
manifests itself with the test data:

>>> import re
>>> s = '# 1  Short offline   Completed without error   00%'
>>> re.split(' {2,}', s)
['# 1', 'Short offline', 'Completed without error', '00%']
>>> [x for x in s.split('  ') if x.strip()]
['# 1', 'Short offline', ' Completed without error', ' 00%']
>>> re.split(' {2,}', s) == [x for x in s.split('  ') if x.strip()]
False

To fix it requires something like:

[x.strip() for x in s.split('  ') if x.strip()]

or:

[x for x in [x.strip() for x in s.split('  ')] if x]

I haven't timed either one of these, but given that the broken
original one was slower than the simpler:

splitter = re.compile(' {2,}').split
splitter(s)

on strings of "normal" length, and given that nobody noticed this bug
right away (even though it was in the printout on my first message,
heh), I think that this shows that (here, let me qualify this
carefully), at least in some cases, the first regexp that comes to my
mind can be prettier, shorter, faster, less bug-prone, etc. than the
first non-regexp that comes to my mind...
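For the record, both fixed variants now agree with the compiled-regexp split on the sample line:

```python
import re

# the OP's sample line: runs of two or more spaces separate the fields
s = '# 1  Short offline   Completed without error   00%'
expected = ['# 1', 'Short offline', 'Completed without error', '00%']

splitter = re.compile(' {2,}').split
fixed_a = [x.strip() for x in s.split('  ') if x.strip()]
fixed_b = [x for x in [x.strip() for x in s.split('  ')] if x]
```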

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: order that destructors get called?

2010-04-07 Thread Gabriel Genellina
On Wed, 07 Apr 2010 19:08:14 -0300, Brendan Miller wrote:


I'm used to C++ where destructors get called in reverse order of
construction, like this:

{
Foo foo;
Bar bar;

// calls Bar::~Bar()
// calls Foo::~Foo()
}


That behavior is explicitly guaranteed by the C++ language. Python does  
not have such guarantees -- destructors may be delayed an arbitrary amount  
of time, or even not called at all.
In contrast, Python does have a `try/finally` construct, and the `with`  
statement. If Foo and Bar implement adequate __enter__ and __exit__  
methods, the above code would become:


with Foo() as foo:
  with Bar() as bar:
    # do something

On older Python versions it is more verbose:

foo = Foo()
try:
  bar = Bar()
  try:
# do something
  finally:
bar.release_resources()
finally:
  foo.release_resources()
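To make the __enter__/__exit__ point concrete, here is a runnable sketch (Tracked is a made-up stand-in for Foo/Bar) showing that releases happen in reverse order of acquisition, just like the C++ destructors:

```python
order = []

class Tracked(object):
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_val, exc_tb):
        order.append(self.name)   # release runs when the block is exited
        return False              # don't suppress exceptions

with Tracked('Foo') as foo:
    with Tracked('Bar') as bar:
        pass
# inner block exits first, so Bar is released before Foo
```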

I'm writing a ctypes wrapper for some native code, and I need to manage
some memory. I'm wrapping the memory in a Python class that deletes the
underlying memory when the Python class's reference count hits zero.


If the Python object lifetime is tied to certain lexical scope (like the  
foo,bar local variables in your C++ example) you may use `with` or  
`finally` as above.

If some other object with a longer lifetime keeps a reference, see below.


When doing this, I noticed some odd behaviour. I had code like this:

def delete_my_resource(res):
# deletes res

class MyClass(object):
def __del__(self):
delete_my_resource(self.res)

o = MyClass()

What happens is that as the program shuts down, delete_my_resource is
released *before* o is released. So when __del__ gets called,
delete_my_resource is now None.


Implementing __del__ is not always a good idea; among other things, the  
garbage collector cannot break a cycle if any involved object contains a  
__del__ method. [1]
If you still want to implement __del__, keep a reference to  
delete_my_resource in the method itself:


    def __del__(self,
                delete_my_resource=delete_my_resource):
        delete_my_resource(self.res)

(and do the same with any global name that delete_my_resource itself may  
reference).


The best approach is to store a weak reference to foo and bar somewhere;  
weak references are notified right before the referent is destroyed. [4]
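A minimal sketch of the weak-reference approach (Resource is a made-up class; the immediate callback relies on CPython's reference counting, so other implementations may delay it):

```python
import weakref

class Resource(object):
    pass

notified = []

r = Resource()
# the callback fires just before the referent is reclaimed; with
# CPython's reference counting this happens right at `del r`
wr = weakref.ref(r, lambda ref: notified.append('released'))
del r
```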


And last, if you want to release something when the program terminates,  
you may use the atexit module.


What I'm wondering is if there's any documented order that reference
counts get decremented when a module is released or when a program
terminates.


Not much, as Stephen Hansen already told you; but see the comments in  
PyImport_Cleanup function in import.c [2] and in _PyModule_Clear in  
moduleobject.c [3]
Standard disclaimer: these undocumented details only apply to the current  
version of CPython, may change in future releases, and are not applicable  
at all to other implementations. So it's not a good idea to rely on this  
behavior.


[1] http://docs.python.org/reference/datamodel.html#object.__del__
[2] http://svn.python.org/view/python/trunk/Python/import.c?view=markup
[3] http://svn.python.org/view/python/trunk/Objects/moduleobject.c?view=markup

[4] http://docs.python.org/library/weakref.html

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Simple Cookie Script: Not recognising Cookie

2010-04-07 Thread Jimbo
Hi, I have a simple Python program that assigns a cookie to a web user
when they open the script the first time (in an internet browser). If
they open the script a second time, the script should display the line
"You have been here 2 times."; if they open the script again, it
should show "You have been here 3 times" on the webpage, and so on.

But for some reason, my program is not assigning or recognising an
assigned cookie and outputting the line "You have been here x times". I
have gone over my code for two hours now and I can't figure out what is
going wrong.

Can you help me figure out what's wrong? I have my own CGI server that
just runs on my machine, so it's not that; it's the code to recognise/
assign a cookie.

[code]#!/usr/bin/env python

import Cookie
import cgi
import os

HTML_template = """
<html>
  <head>
  </head>
  <body>
  <p> %s </p>
  </body>
</html>
"""

def main():

    # Web client is new to the site, so we need to assign a cookie to them
    cookie = Cookie.SimpleCookie()
    cookie['SESSIONID'] = '1'
    code = "No cookie exists. Welcome, this is your first visit."

    if 'HTTP_COOKIE' in os.environ:
        cookie = Cookie.SimpleCookie(os.environ['HTTP_COOKIE'])
        # If Web client has been here before
        if cookie.has_key('SESSIONID'):
            cookie['SESSIONID'].value = int(cookie['SESSIONID'].value) + 1
            code = "You have been here %s times." % cookie['SESSIONID'].value
        else:
            cookie = Cookie.SimpleCookie()
            cookie['SESSIONID'] = '1'
            code = "I Have a cookie, but SESSIONID does not exist"

    print "Content-Type: text/html\n"
    print HTML_template % code


if __name__ == "__main__":
    main()
[/code]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ftp and python

2010-04-07 Thread John Nagle

Tim Chase wrote:

Matjaz Pfefferer wrote:

What would be the easiest way to copy files from one ftp
folder to another without downloading them to local system?


As best I can tell, this isn't well-supported by FTP[1] which doesn't 
seem to have a native "copy this file from server-location to 
server-location bypassing the client". There's a pair of RNFR/RNTO 
commands that allow you to rename (or perhaps move as well) a file which 
ftplib.FTP.rename() supports, but it sounds like you want two copies.


   In theory, the FTP spec supports "three-way transfers", where the
source, destination, and control can all be on different machines.
But no modern implementation supports that.

John Nagle
--
http://mail.python.org/mailman/listinfo/python-list


Re: remote multiprocessing, shared object

2010-04-07 Thread Kushal Kumaran
On Thu, Apr 8, 2010 at 3:04 AM, Norm Matloff  wrote:
> Should be a simple question, but I can't seem to make it work from my
> understanding of the docs.
>
> I want to use the multiprocessing module with remote clients, accessing
> shared lists.  I gather one is supposed to use register(), but I don't
> see exactly how.  I'd like to have the clients read and write the shared
> list directly, not via some kind of get() and set() functions.  It's
> clear how to do this in a shared-memory setting, but how can one do it
> across a network, i.e. with serve_forever(), connect() etc.?
>
> Any help, especially with a concrete example, would be much appreciated.
> Thanks.
>

There's an example in the multiprocessing documentation.
http://docs.python.org/library/multiprocessing.html#using-a-remote-manager

It creates a shared queue, but it's easy to modify for lists.

For example, here's your shared list server:
from multiprocessing.managers import BaseManager
shared_list = []
class ListManager(BaseManager): pass
ListManager.register('get_list', callable=lambda:shared_list)
m = ListManager(address=('', 5), authkey='abracadabra')
s = m.get_server()
s.serve_forever()

A client that adds an element to your shared list:
import random
from multiprocessing.managers import BaseManager
class ListManager(BaseManager): pass
ListManager.register('get_list')
m = ListManager(address=('localhost', 5), authkey='abracadabra')
m.connect()
l = m.get_list()
l.append(random.random())

And a client that prints out the shared list:
from multiprocessing.managers import BaseManager
class ListManager(BaseManager): pass
ListManager.register('get_list')
m = ListManager(address=('localhost', 5), authkey='abracadabra')
m.connect()
l = m.get_list()
print str(l)

-- 
regards,
kushal
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Kushal Kumaran
On Thu, Apr 8, 2010 at 3:10 AM, J  wrote:
> Can someone make me un-crazy?
>
> I have a bit of code that right now, looks like this:
>
> status = getoutput('smartctl -l selftest /dev/sda').splitlines()[6]
> status = re.sub(' (?= )(?=([^"]*"[^"]*")*[^"]*$)', ":", status)
> print status
>
> Basically, it pulls the first actual line of data from the return you
> get when you use smartctl to look at a hard disk's selftest log.
>
> The raw data looks like this:
>
> # 1  Short offline       Completed without error       00%       679         -
>
> Unfortunately, all that whitespace is arbitrary single space
> characters.  And I am interested in the string that appears in the
> third column, which changes as the test runs and then completes.  So
> in the example, "Completed without error"
>
> The regex I have up there doesn't quite work, as it seems to be
> subbing EVERY space (or at least in instances of more than one space)
> to a ':' like this:
>
> # 1: Short offline:: Completed without error:: 00%:: 679 -
>
> Ultimately, what I'm trying to do is either replace any space that is
>> one space with a delimiter, then split the result into a list and
> get the third item.
>
> OR, if there's a smarter, shorter, or better way of doing it, I'd love to 
> know.
>
> The end result should pull the whole string in the middle of that
> output line, and then I can use that to compare to a list of possible
> output strings to determine if the test is still running, has
> completed successfully, or failed.
>

Is there any particular reason you absolutely must extract the status
message?  If you already have a list of possible status messages, you
could just test which one of those is present in the line...
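That membership test might look like this (only the first status string comes from the OP's output; the others are made-up examples of what such a list could contain):

```python
line = '# 1  Short offline       Completed without error       00%       679         -'

# candidate status messages; all but the first are hypothetical
statuses = ['Completed without error',
            'Self-test routine in progress',
            'Aborted by host']

matches = [status for status in statuses if status in line]
```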

> Unfortunately, my google-fu fails right now, and my Regex powers were
> always rather weak anyway...
>
> So any ideas on what the best way to proceed with this would be?


-- 
regards,
kushal
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: remote multiprocessing, shared object

2010-04-07 Thread Norm Matloff
Thanks very much, Kushal.

But it seems to me that it doesn't quite work.  After your first client
below creates l and calls append() on it, it would seem that one could
not then assign to it, e.g. do

   l[1] = 8

What I'd like is to write remote multiprocessing code just like threads
code (or for that matter, just like shared-memory multiprocessing code),
i.e. reading and writing shared globals.  Is this even possible?

Norm

On 2010-04-08, Kushal Kumaran  wrote:
> On Thu, Apr 8, 2010 at 3:04 AM, Norm Matloff  wrote:
>> Should be a simple question, but I can't seem to make it work from my
>> understanding of the docs.
>>
>> I want to use the multiprocessing module with remote clients, accessing
>> shared lists.  I gather one is supposed to use register(), but I don't
>> see exactly how.  I'd like to have the clients read and write the shared
>> list directly, not via some kind of get() and set() functions.  It's
>> clear how to do this in a shared-memory setting, but how can one do it
>> across a network, i.e. with serve_forever(), connect() etc.?
>>
>> Any help, especially with a concrete example, would be much appreciated.
>> Thanks.
>>
>
> There's an example in the multiprocessing documentation.
> http://docs.python.org/library/multiprocessing.html#using-a-remote-manager
>
> It creates a shared queue, but it's easy to modify for lists.
>
> For example, here's your shared list server:
> from multiprocessing.managers import BaseManager
> shared_list = []
> class ListManager(BaseManager): pass
> ListManager.register('get_list', callable=lambda:shared_list)
> m = ListManager(address=('', 5), authkey='abracadabra')
> s = m.get_server()
> s.serve_forever()
>
> A client that adds an element to your shared list:
> import random
> from multiprocessing.managers import BaseManager
> class ListManager(BaseManager): pass
> ListManager.register('get_list')
> m = ListManager(address=('localhost', 5), authkey='abracadabra')
> m.connect()
> l = m.get_list()
> l.append(random.random())
>
> And a client that prints out the shared list:
> from multiprocessing.managers import BaseManager
> class ListManager(BaseManager): pass
> ListManager.register('get_list')
> m = ListManager(address=('localhost', 5), authkey='abracadabra')
> m.connect()
> l = m.get_list()
> print str(l)
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: remote multiprocessing, shared object

2010-04-07 Thread Kushal Kumaran
On Thu, Apr 8, 2010 at 11:30 AM, Norm Matloff  wrote:
> Thanks very much, Kushal.
>
> But it seems to me that it doesn't quite work.  After your first client
> below creates l and calls append() on it, it would seem that one could
> not then assign to it, e.g. do
>
>   l[1] = 8
>
> What I'd like is to write remote multiprocessing code just like threads
> code (or for that matter, just like shared-memory multiprocessing code),
> i.e. reading and writing shared globals.  Is this even possible?
>

Try this server:
from multiprocessing.managers import BaseManager, ListProxy
shared_list = []
class ListManager(BaseManager): pass
ListManager.register('get_list', callable=lambda: shared_list,
                     proxytype=ListProxy)
m = ListManager(address=('', 5), authkey='abracadabra')
s = m.get_server()
s.serve_forever()

Just changed the proxy type appropriately.  See the managers.py file
in the multiprocessing source for details.

> 

-- 
regards,
kushal
-- 
http://mail.python.org/mailman/listinfo/python-list