Re: Problems of Symbol Congestion in Computer Languages

2011-03-13 Thread Robert Maas, http://tinyurl.com/uh3t
> From: rantingrick 
> Anyone with half a brain understands the metric system is far
> superior (on many levels) then any of the other units of
> measurement.

Anyone with a *whole* brain can see that you are mistaken. The
current "metric" system has two serious flaws:

It's based on powers of ten rather than powers of two, creating a
disconnect between our communication with computers (in decimal)
and how computers deal with numbers internally (in binary). Hence
the confusion newbies have as to why if you type into the REP loop
 (+ 1.1 2.2 3.3)
you get out
 6.604

The fundamental units are absurd national history artifacts such as
the French "metre" stick when maintained at a particular
temperature, and the Greenwich Observatory "second" as 1/(24*60*60)
of the time it took the Earth to rotate once relative to a
line-of-sight to the Sun under some circumstance long ago.

And now these have been more precisely defined as *exactly* some
inscrutable multiples of the wavelength and time-period of some
particular emission from some particular isotope under certain
particular conditions:
http://en.wikipedia.org/wiki/Metre#Standard_wavelength_of_krypton-86_emission
 (that direct definition replaced by the following:)
http://en.wikipedia.org/wiki/Metre#Speed_of_light
"The metre is the length of the path travelled by light in vacuum
 during a time interval of ^1/[299,792,458] of a second."
http://en.wikipedia.org/wiki/Second#Modern_measurements
"the duration of 9,192,631,770 periods of the radiation corresponding to
 the transition between the two hyperfine levels of the ground state of
 the caesium-133 atom"
Exercise to the reader: Combine those nine-decimal-digit and
ten-decimal-digit numbers appropriately to express exactly how many
wavelengths of the hyperfine transition equals one meter.
Hint: You either multiply or divide, hence if you just guess you
have one chance out of 3 of being correct.
-- 
http://mail.python.org/mailman/listinfo/python-list


Is there any python library that parse c++ source code statically

2011-03-13 Thread kuangye

Hi, all. I need to generate source code in another programming language
from C++ source code for a project. To achieve this, the first step is
to "understand" the C++ source code, at least formally. Is there any
library to parse C++ source code statically, so that I can build on top
of it?

Since the C++ source code in question is rather simple and regular, I
think I can generate another-language representation from it.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is there any python library that parse c++ source code statically

2011-03-13 Thread Francesco Bochicchio
On 13 Mar, 10:14, kuangye  wrote:
> Hi, all. I need to generate other programming language source code
> from C++ source code for a project. To achieve this, the first step is
> to "understand" the c++ source code at least in formally. Thus is
> there any library to parse the C++ source code statically. So I can
> developer on this library.
>
> Since the C++ source code is rather simple and regular. I think i can
> generate other language representation from C++ source code.


The problem is that C++ is a beast of a language and it is not easy to
find full parsers for it.
I've never done it, but some time ago I researched possible ways to do
it. The best idea I could come up with is doing it in 2 steps:

 - using gcc-xml ( http://www.gccxml.org/HTML/Index.html ) to generate
   an xml representation of the code
 - using one of the many xml libraries for python to read the xml
   equivalent of the code and then generate the equivalent code in
   other languages ( where you could use a template engine, but I found
   that the python built-in string formatting libraries are quite up to
   the task ).
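To make step 2 concrete, here is a minimal sketch; the XML snippet and its tag/attribute names are only illustrative stand-ins, not the real gccxml schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical miniature of a gcc-xml dump (real output is much richer).
GCCXML_SAMPLE = """
<GCC_XML>
  <Function name="add" returns="int"/>
  <Function name="reset" returns="void"/>
</GCC_XML>
"""

def to_python_stubs(xml_text):
    """Turn each <Function> element into a Python stub definition."""
    root = ET.fromstring(xml_text)
    lines = []
    for fn in root.iter("Function"):
        lines.append("def %s():  # C++ return type: %s"
                     % (fn.get("name"), fn.get("returns")))
        lines.append("    pass")
    return "\n".join(lines)

print(to_python_stubs(GCCXML_SAMPLE))
```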

HTH

Ciao
---
FB
-- 
http://mail.python.org/mailman/listinfo/python-list


OS X 10.6 pip compile error

2011-03-13 Thread mak
Hi all, how do I fix this?

$ sudo pip install lightblue
Downloading/unpacking lightblue
  Downloading lightblue-0.4.tar.gz (204Kb): 204Kb downloaded
  Running setup.py egg_info for package lightblue

Installing collected packages: lightblue
  Running setup.py install for lightblue

Build settings from command line:
DEPLOYMENT_LOCATION = YES
DSTROOT = /
INSTALL_PATH = /Library/Frameworks


=== BUILD NATIVE TARGET LightAquaBlue OF PROJECT LightAquaBlue
WITH CONFIGURATION Release ===
Check dependencies
GCC 4.2 is not compatible with the Mac OS X 10.4 SDK (file
BBBluetoothOBEXClient.m)
GCC 4.2 is not compatible with the Mac OS X 10.4 SDK (file
BBBluetoothOBEXClient.m)
** BUILD FAILED **

Successfully installed lightblue
Cleaning up...
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-03-13 Thread Steven D'Aprano
On Sun, 13 Mar 2011 00:52:24 -0800, Robert Maas, http://tinyurl.com/uh3t
wrote:

> Exercise to the reader: Combine those nine-decimal-digit and
> ten-decimal-digit numbers appropriately to express exactly how many
> wavelengths of the hyperfine transition equals one meter. Hint: You
> either multiply or divide, hence if you just guess you have one chance
> out of 3 of being correct.


Neither. The question is nonsense. The hyperfine transition doesn't have 
a wavelength. It is the radiation emitted that has a wavelength. To work 
out the wavelength of the radiation doesn't require guessing, and it's 
not that complicated, it needs nothing more than basic maths.

Speed of light = 1 metre travelled in 1/299792458 of a second
If 9192631770 periods of the radiation takes 1 second, 1 period takes 
1/9192631770 of a second.

Combine that with the formula for wavelength:
Wavelength = speed of light * period
  = 299792458 m/s * 1/9192631770 s
  = 0.03261225571749406 metre
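The same arithmetic, checked at the Python prompt:

```python
c = 299792458         # metres per second (exact by definition)
periods = 9192631770  # caesium-133 hyperfine periods per second (exact by definition)

wavelength = c / float(periods)   # float() so this also works under Python 2
print(wavelength)                 # roughly 0.0326 m, i.e. a ~3.3 cm microwave
```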


Your rant against the metric system is entertaining but silly. Any 
measuring system requires exact definitions of units, otherwise people 
will disagree on how many units a particular thing is. The imperial 
system is a good example of this: when you say something is "15 miles", 
do you mean UK statute miles, US miles, survey miles, international 
miles, nautical miles, or something else? The US and the UK agree that a 
mile is exactly 1,760 yards, but disagree on the size of a yard. And 
let's not get started on fluid ounces (a measurement of volume!) or 
gallons...

The metric system is defined to such a ridiculous level of precision 
because we have the technology, and the need, to measure things to that 
level of precision. Standards need to be based on something which is 
universal and unchanging. Anybody anywhere in the world can (in 
principle) determine their own standard one metre rule, or one second 
timepiece, without arguments about which Roman soldier's paces defines a 
yard, or which king's forearm is a cubit.


Follow-ups set to comp.lang.python.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Guido rethinking removal of cmp from sort method

2011-03-13 Thread Steven D'Aprano
The removal of cmp from the sort method of lists is probably the most 
disliked change in Python 3. On the python-dev mailing list at the 
moment, Guido is considering whether or not it was a mistake.

If anyone has any use-cases for sorting with a comparison function that 
either can't be written using a key function, or that perform really 
badly when done so, this would be a good time to speak up.
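For reference, many comparison functions can already be ported mechanically with functools.cmp_to_key (added in 2.7 and 3.2); the ordering below is made up purely for illustration:

```python
from functools import cmp_to_key

def compare(a, b):
    """Sort by length; break ties in reverse alphabetical order."""
    if len(a) != len(b):
        return len(a) - len(b)
    return (b > a) - (b < a)   # cmp(b, a) without the removed builtin

words = ['pear', 'fig', 'apple', 'plum']
print(sorted(words, key=cmp_to_key(compare)))
# ['fig', 'plum', 'pear', 'apple']
```

The interesting use-cases are precisely those where no such mechanical translation exists, or where the wrapper objects cost too much.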



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Purely historic question: VT200 text graphic programming

2011-03-13 Thread Anssi Saari
rzed  writes:

> Did you say "was"? The last time I did any programming on a VMS system 
> was ... about 5 1/2 hours ago. Our shop runs OpenVMS now, programs 
> mostly in C and BASIC. I've quietly insinuated Python into the mix 
> over the last few months, and that has helped my sanity considerably.

I suppose I meant VMS running on VAX. I'd guess you run OpenVMS on
Itanium these days?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is there any python library that parse c++ source code statically

2011-03-13 Thread Stefan Behnel

Francesco Bochicchio, 13.03.2011 10:37:

On 13 Mar, 10:14, kuangye  wrote:

Hi, all. I need to generate other programming language source code
from C++ source code for a project. To achieve this, the first step is
to "understand" the c++ source code at least in formally. Thus is
there any library to parse the C++ source code statically. So I can
developer on this library.

Since the C++ source code is rather simple and regular. I think i can
generate other language representation from C++ source code.



The problem is that C++ is a beast of a language and is not easy to
find full parsers for it.
I've never done it, but sometime I researched possible ways to do it.
The best idea I could come with
is doing it in 2 steps:

  - using gcc-xml ( http://www.gccxml.org/HTML/Index.html ) to generate
an xml representation of the code
  - using one of the many xml library for python to read the xml
equivalent of the code and then generate the equivalent
code in other languages ( where you could use a template engine,
but I found that the python built-in string
formatting libraries are quite up to the task ).


I also heard that clang is supposed to be quite useful for this kind of 
undertaking.


Stefan

--
http://mail.python.org/mailman/listinfo/python-list


Re: Is there any python library that parse c++ source code statically

2011-03-13 Thread Philip Semanchuk

On Mar 13, 2011, at 11:46 AM, Stefan Behnel wrote:

> Francesco Bochicchio, 13.03.2011 10:37:
>> On 13 Mar, 10:14, kuangye  wrote:
>>> Hi, all. I need to generate other programming language source code
>>> from C++ source code for a project. To achieve this, the first step is
>>> to "understand" the c++ source code at least in formally. Thus is
>>> there any library to parse the C++ source code statically. So I can
>>> developer on this library.
>>> 
>>> Since the C++ source code is rather simple and regular. I think i can
>>> generate other language representation from C++ source code.
>> 
>> 
>> The problem is that C++ is a beast of a language and is not easy to
>> find full parsers for it.
>> I've never done it, but sometime I researched possible ways to do it.
>> The best idea I could come with
>> is doing it in 2 steps:
>> 
>>  - using gcc-xml ( http://www.gccxml.org/HTML/Index.html ) to generate
>> an xml representation of the code
>>  - using one of the many xml library for python to read the xml
>> equivalent of the code and then generate the equivalent
>>code in other languages ( where you could use a template engine,
>> but I found that the python built-in string
>>formatting libraries are quite up to the task ).
> 
> I also heard that clang is supposed to the quite useful for this kind of 
> undertaking.

I was just discussing this with some folks here at PyCon. Clang has a library 
interface (libclang):
http://clang.llvm.org/doxygen/group__CINDEX.html

There's Python bindings for it; I'm sure the author would like some company =)

https://bitbucket.org/binet/py-clang/


Cheers
P

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Changing class name causes process to 'hang'

2011-03-13 Thread Tim Johnson
* Terry Reedy  [110312 17:45]:
> >## code below
> > import cgi
> > self.form = cgi.FieldStorage(keep_blank_values=1)
> >## /code
 
> And cgitools is a class therein?
  Code above is called with/from cgitools 
> >   Hmm! I'm unsure what you mean here, but
> 
> If the name 'cgitools' is used *somewhere*, not necessary in cgilib
> itself, other than in the class header itself, but the resulting
> NameError is *somehow* caught inside a retry loop, then your total
> application would exhibit the symptom you describe ('hanging'). 
  That is one thing that I have looked for - and so far I have found nothing.

> This could happen if there is a bare 'except:' statement (these
> are not recommended) that needs to be specific 'except
> SomeError:'.
 
> >The same happens if I 'alias' cgitools, as in
> >class CGI(cgitools):
> > pass
> 
> You mean you leave 'class cgitools()...' alone within cgilib and
> just add that? That would discredit the theory above. What if you
> use a different alias, like 'moretools'? Are you on an OS where
> 'cgi' and 'CGI' could get confused?
  I am on linux. And the 'alias' was introduced experimentally after
  the symptoms began. And `CgiTools' also results in
  non-termination.

> >It is as if some gremlin lives on my system and insists that I use
> >the name `cgitools' and only the name `cgitools'. I'm sure
> >this comes from a side effect somewhere in the process.
> >thanks for the reply
  One other thing I just realized:
  The process stops inside of a function call to another object
  method; if that method call is removed, the process terminates.
  :) I may have a solution later today, and will relay it to you if
  found. Must have coffee first.

  thanks for your interest, I really appreciate it.
-- 
Tim 
tim at johnsons-web.com or akwebsoft.com
http://www.akwebsoft.com
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: [IronPython] IronPython 2.7 Now Available

2011-03-13 Thread Medcoff, Charles
Can someone on the list clarify differences or overlap between the tools 
included in this release, and the PTVS release?
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: [IronPython] IronPython 2.7 Now Available

2011-03-13 Thread Dino Viehland
The PTVS release is really an extended version of the tools in IronPython 2.7.  
It adds support for CPython including debugging, profiling, etc...  while still 
supporting IronPython as well.  We'll likely either replace the tools 
distributed w/ IronPython with this version (maybe minus things like HPC 
support) or we'll pull the IpyTools out of the distribution and encourage 
people to go for the separate download.  No changes will likely happen until 
IronPython 3.x though as 2.7 is now out the door and it'd be a pretty 
significant change.

For the time being you'll need to choose one or the other - you can always 
either skip installing the IpyTools w/ the IronPython install and install 
the PTVS instead, or just stick w/ the existing IronPython tools.

> -Original Message-
> From: [email protected] [mailto:users-
> [email protected]] On Behalf Of Medcoff, Charles
> Sent: Sunday, March 13, 2011 2:15 PM
> To: Discussion of IronPython; python-list
> Subject: Re: [IronPython] IronPython 2.7 Now Available
> 
> Can someone on the list clarify differences or overlap between the tools
> included in this release, and the PTVS release?
> ___
> Users mailing list
> [email protected]
> http://lists.ironpython.com/listinfo.cgi/users-ironpython.com
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: [IronPython] IronPython 2.7 Now Available

2011-03-13 Thread Medcoff, Charles
Thanks that helps. I've tried the first option. Not doing much Python stuff at 
the moment, but I'll follow up if I experience any issues with this approach.

I'm very excited that both the language and tools support is forging ahead - 
thanks all.

-Original Message-
From: [email protected] 
[mailto:[email protected]] On Behalf Of Dino Viehland
Sent: Sunday, March 13, 2011 2:22 PM
To: Discussion of IronPython; python-list
Subject: Re: [IronPython] IronPython 2.7 Now Available

The PTVS release is really an extended version of the tools in IronPython 2.7.  
It adds support for CPython including debugging, profiling, etc...  while still 
supporting IronPython as well.  We'll likely either replace the tools 
distributed w/ IronPython with this version (maybe minus things like HPC 
support) or we'll pull the IpyTools out of the distribution and encourage 
people to go for the separate download.  No changes will likely happen until 
IronPython 3.x though as 2.7 is now out the door and it'd be a pretty 
significant change.

For the time being you'll need to choose one or the other - you can always 
choose to not by either not installing the IpyTools w/ the IronPython install 
and install the PTVS or you can just stick w/ the existing IronPython tools.

> -Original Message-
> From: [email protected] [mailto:users- 
> [email protected]] On Behalf Of Medcoff, Charles
> Sent: Sunday, March 13, 2011 2:15 PM
> To: Discussion of IronPython; python-list
> Subject: Re: [IronPython] IronPython 2.7 Now Available
> 
> Can someone on the list clarify differences or overlap between the 
> tools included in this release, and the PTVS release?
> ___
> Users mailing list
> [email protected]
> http://lists.ironpython.com/listinfo.cgi/users-ironpython.com
___
Users mailing list
[email protected]
http://lists.ironpython.com/listinfo.cgi/users-ironpython.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Purely historic question: VT200 text graphic programming

2011-03-13 Thread rzed
Anssi Saari  wrote in
news:[email protected]: 

> rzed  writes:
> 
>> Did you say "was"? The last time I did any programming on a VMS
>> system was ... about 5 1/2 hours ago. Our shop runs OpenVMS now,
>> programs mostly in C and BASIC. I've quietly insinuated Python
>> into the mix over the last few months, and that has helped my
>> sanity considerably. 
> 
> I suppose I meant VMS running on VAX. I'd guess you run OpenVMS
> on Itanium these days?
> 
> 

Actually, on Alphas. We do have every intention of moving away from 
VMS, we keep saying. So hardware upgrades are not under consideration.

-- 
rzed
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Changing class name causes process to 'hang'

2011-03-13 Thread Tim Johnson
* Tim Johnson  [110313 08:27]:
>   One other thing I just realized:
>   The process stops inside of a function call to another object
>   method, if that method call is removed, the process teminates.
>   :) I may have a solution later today, and will relay it to you if
>   found. Must have coffee first.
  I've had coffee, and also I've eaten a bit of crow. If I had no
  dignity to spare I might have dropped this thread after the last
  post, but I owe it to those who might come after me to explain
  what happened. If I have any excuse, it is that I wrote this code
  when I had about a month of python experience.
  So here we go... 
  The cleanup part is to write a logfile. The logfile is written by
  the write() method of a log object. The method was coded to accept
  any number of data types and execute based on the type from an
  if/elif/else code block. The cgi object was passed to the method.
  The original code follows:
  ## code begins
  if type(args) == type({}): ## it's a dictionary
      args['time_date_stamp'] = '%s%d' % (std.local_time(), std.randomize(8))
      keys = args.keys()
      keys.sort()
      for key in keys:
          outfile.write('\t%s: %s\n' % (key, args[key]))
  elif type(args) == type(''):
      outfile.write('%s\n%s\n' % (std.local_time(), args))
  elif std.IsCgiObj(args):   ## dump the cgi object
      dump = args.getEnv('time_date_stamp=%s' % (std.local_time()))
      for line in dump:
          outfile.write('  %s\n' % line)
  else: ## default = it's a list
      if args:
          outfile.write('time_date_stamp=%s\n' % (std.local_time()))
          for arg in args:
              outfile.write('  %s\n' % arg)
  ## /code ends
I did two obvious things wrong here:
First of all, std.IsCgiObj() returned false when I changed
the class name because std.IsCgiObj() tested for an explicit
match of 'cgitools' with the object's __class__.__name__ member.

Secondly, and worse, the default of the test block was an assumption
and I did not test the assumption. Bad, bad, very bad!
Therefore my code attempted to process the object as a list and down
the Rabbit Hole we went. And I ended up with some *really* big
logfiles :).

Following is a tentative revision:
  ## code begins
  elif 'instance' in (str(type(args))):   ## it's an object
      if hasattr(args, 'getEnv'): ## test for method
          dump = args.getEnv('time_date_stamp=%s' % (std.local_time()))
          for line in dump:
              outfile.write('  %s\n' % line)
      else:
          erh.Report("object passed to logger.write() must have a `getEnv()' method")
  else: ## it's a list
      if type(args) != []:  ## make no assumptions
          erh.Report('List expected in default condition of logger.write()')
      if args:
          outfile.write('time_date_stamp=%s\n' % (std.local_time()))
          for arg in args:
              outfile.write('  %s\n' % arg)
  ## /code ends
  ## erh.Report() writes a message and aborts the process.
Of course, I could have problems with an object with a
malfunctioning getEnv() method, so I'll have to chew that one over
for a while.
I appreciate Terry's help. I'd welcome any other comments. I'm
also researching the use of __class__.__name__. One of my questions
is: can the implementation of an internal like __class__.__name__
change in the future?

-- 
Tim 
tim at johnsons-web.com or akwebsoft.com
http://www.akwebsoft.com
-- 
http://mail.python.org/mailman/listinfo/python-list


generator / iterator mystery

2011-03-13 Thread Dave Abrahams

Please consider:

>>> from itertools import chain
>>> def enum3(x): return ((x,n) for n in range(3))
... 
>>> list(enum3('a'))
[('a', 0), ('a', 1), ('a', 2)]


# Rewrite the same expression four different ways:

>>> list(chain(  enum3('a'), enum3('b'), enum3('c')  ))
[('a', 0), ('a', 1), ('a', 2), ('b', 0), ('b', 1), ('b', 2), ('c', 0), ('c', 
1), ('c', 2)]

>>> list(chain(  *(enum3(x) for x in 'abc')  ))
[('a', 0), ('a', 1), ('a', 2), ('b', 0), ('b', 1), ('b', 2), ('c', 0), ('c', 
1), ('c', 2)]


>>> list(chain(  
...(('a',n) for n in range(3)), 
...(('b',n) for n in range(3)), 
...(('c',n) for n in range(3))  ))
[('a', 0), ('a', 1), ('a', 2), ('b', 0), ('b', 1), ('b', 2), ('c', 0), ('c', 
1), ('c', 2)]

>>> list(chain(  *(((x,n) for n in range(3)) for x in 'abc')  ))
[('c', 0), ('c', 1), ('c', 2), ('c', 0), ('c', 1), ('c', 2), ('c', 0), ('c', 
1), ('c', 2)]

Huh?  Can anyone explain why the last result is different?
(This is with Python 2.6)

Thanks in advance!

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Changing class name causes process to 'hang'

2011-03-13 Thread Terry Reedy

On 3/13/2011 3:17 PM, Tim Johnson wrote:

* Tim Johnson  [110313 08:27]:

   One other thing I just realized:
   The process stops inside of a function call to another object
   method, if that method call is removed, the process teminates.
   :) I may have a solution later today, and will relay it to you if
   found. Must have coffee first.

   I've had coffee, and also I've eaten a bit of crow. If I had no
   dignity to spare I might have dropped this thread after the last
   post, but I owe it to those who might come after me to explain
   what happened. If I have any excuse, it is that I wrote this code
   when I had about a month of python experience.


One measure of people is how big a mistake they are willing to make -- 
in public. I learned to always test posted code, or disclaim it as 'not 
tested', by posting undisclaimed bad code -- more than once.



   So here we go...
   The cleanup part is to write a logfile. The logfile is written by
   the write() method of a log object. The method was coded to accept
   any number of data types and execute based on the type from an
   if/elif/else code block. The cgi object was passed to the method.
   The original code follows:
   ## code begins
   if type(args) == type({}): ## it's a dictionary
       args['time_date_stamp'] = '%s%d' % (std.local_time(), std.randomize(8))
       keys = args.keys()
       keys.sort()
       for key in keys:
           outfile.write('\t%s: %s\n' % (key, args[key]))
   elif type(args) == type(''):
       outfile.write('%s\n%s\n' % (std.local_time(), args))
   elif std.IsCgiObj(args):   ## dump the cgi object
       dump = args.getEnv('time_date_stamp=%s' % (std.local_time()))
       for line in dump:
           outfile.write('  %s\n' % line)
   else: ## default = it's a list
       if args:
           outfile.write('time_date_stamp=%s\n' % (std.local_time()))
           for arg in args:
               outfile.write('  %s\n' % arg)
   ## /code ends
I did two obvious things wrong here:
First of all, std.IsCgiObj() returned false when I changed
the class name because std.IsCgiObj() tested for an explicit
match of 'cgitools' with the objects __class__.__name__ member.


Your fundamental problem is that you changed the api of your module. 
When you do that, you have to review all code that uses that api -- or 
that depends on it. But yes, a module can use a string based on the api 
without actually importing the module. Bad idea I think. Better to 
import the module and make the dependency overt, so the failure is also.

The test should be (and should have been)

   elif isinstance(args, cgi.cgitools):

This would have failed (and will fail) with a loud AttributeError when 
cgitools is renamed.
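The difference is easy to demonstrate (the class names below are invented for the example):

```python
class CgiTools(object):
    pass

class RenamedTools(CgiTools):
    """Stands in for the renamed/aliased class."""
    pass

obj = RenamedTools()

# Name comparison silently stops matching after a rename or subclassing:
print(obj.__class__.__name__ == 'CgiTools')   # False

# isinstance() still recognises instances of subclasses:
print(isinstance(obj, CgiTools))              # True
```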




Secondly, and worse, the default of the test block was an assumption
and I did not test the assumption. Bad, bad, very bad!
Therefore my code attempted to process the object as a list and down
the Rabit Hole we went. And I ended up with some *really* big
logfiles :).


Thank you for confirming that I was basically right that *somewhere* 
there had to be a test for the name 'cgitools' whose failure led to a loop.



Following is a tentative revision:
   ## code begins
elif 'instance' in (str(type(args))):   ## it's an object


*Everything* is an object and therefore an instance of some class. What 
you are testing is whether the object is an instance of a classic class 
defined by a Python 2.x (classic) class statement. Rather fragile;-), as 
I just hinted.


>>> class C(): pass
...
>>> C()
# 2.7 prints
<__main__.C instance at 0x00C41FD0>
# 3.2 prints
<__main__.C object at 0x00F0A5F0>

>>> class C(object): pass
...
>>> C()
# 2.7 also prints
<__main__.C object at 0x00C71130>



    if hasattr(args, 'getEnv'): ## test for method
        dump = args.getEnv('time_date_stamp=%s' % (std.local_time()))
        for line in dump:
            outfile.write('  %s\n' % line)
    else:
        erh.Report("object passed to logger.write() must have a `getEnv()' method")
else: ## it's a list


 just test  elif isinstance(args, list): ...

and put the error message in a separate else: clause


    if type(args) != []:  ## make no assumptions
        erh.Report('List expected in default condition of logger.write()')
    if args:
        outfile.write('time_date_stamp=%s\n' % (std.local_time()))
        for arg in args:
            outfile.write('  %s\n' % arg)
   ## /code ends


adodbapi integer parameters and MS Access

2011-03-13 Thread Joe
Here is my environment:

Windows 7 x64 SP1
Python 3.2
adodbapi 2.4.2
MS Access

Although the above environment is what I am currently using I have
encountered this same problem with Python 3.1.1.   It is not a problem
with Python 2.x.

The problem is as follows:

If you are using a select statement like:

select col_1, col_2 from table where (col_1 = ?)

and you are using the qmark parameter style

and you pass in an integer (for example:  (1, )) for the parameters,
you get the following error:

(-2147352567, 'Exception occurred.', (0, 'Microsoft OLE DB Provider
for ODBC Dri
vers', '[Microsoft][ODBC Microsoft Access Driver]Optional feature not
implemente
d ', None, 0, -2147217887), None)
Command:

select col_1, col_2 from table where (col_1 = ?)

Parameters:
[Name: p0, Dir.: Input, Type: adBigInt, Size: 0, Value: "1",
Precision: 0, Numer
icScale: 0]


If you run the same code using pyodbc or odbc in Python 3.2 (or 3.1.1)
it works fine so I know it is not a problem with the ODBC driver.

If you run the same code in Python 2.6.2 and adodbapi it also runs
fine.

Further investigation using different tables and columns seems to
conclude that:

adodbapi + Python 3.x + qmark parameters + parameters that are
integers produces this error.

col_1 in the database is defined as a number (long integer with 0
decimal positions).

If you convert the parameter to a string (str(1), ) then adodbapi
works in Python 3.2.

Is this a known bug?
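In the meantime the str() workaround can be applied generically before calling execute(); a sketch (the helper name is invented):

```python
def coerce_int_params(params):
    """Pass integer parameters as strings so adodbapi does not map them
    to adBigInt; the Access driver converts the text back to a number."""
    return tuple(str(p) if isinstance(p, int) else p for p in params)

print(coerce_int_params((1, 'abc', 2.5)))   # ('1', 'abc', 2.5)
```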
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: generator / iterator mystery

2011-03-13 Thread Peter Otten
Dave Abrahams wrote:

> >>> list(chain(  *(((x,n) for n in range(3)) for x in 'abc')  ))
> [('c', 0), ('c', 1), ('c', 2), ('c', 0), ('c', 1), ('c', 2), ('c', 0),
> ('c', 1), ('c', 2)]
> 
> Huh?  Can anyone explain why the last result is different?
> (This is with Python 2.6)

The *-operator is not lazy, so the outer generator will be exhausted before 
anything is passed to the chain() function. You can see what will be passed 
with

>>> generators = [((x, n) for n in range(3)) for x in "abc"]

x is defined in the enclosing scope, and at this point its value is

>>> x 
'c'

i. e. what was assigned to it in the last iteration of the list 
comprehension. Because of Python's late binding all generators in the list 
see this value:

>>> next(generators[0])
('c', 0)

>>> [next(g) for g in generators]
[('c', 1), ('c', 0), ('c', 0)]

Note that unlike list comps in 2.x generator expressions do not expose their 
iterating vars and therefore are a bit harder to inspect

>>> del x
>>> generators = list(((x, n) for n in range(3)) for x in "abc")
>>> x
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'x' is not defined

...but the principle is the same:

>>> g = generators[0]
[snip a few dir(...) calls]
>>> g.gi_frame.f_locals
{'.0': , 'x': 'c'}

One way to address the problem is to make a separate closure for each of the 
inner generators which is what you achieved with your enum3() function; 
there the inner generator sees the local x of the current enum3() call. 
Another fix is to use chain.from_iterable(...) instead of chain(*...):

>>> list(chain.from_iterable(((x, n) for n in range(3)) for x in "abc"))
[('a', 0), ('a', 1), ('a', 2), ('b', 0), ('b', 1), ('b', 2), ('c', 0), ('c', 
1), ('c', 2)]

Here the outer generator proceeds to the next x only when the inner 
generator is exhausted.

Peter

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: generator / iterator mystery

2011-03-13 Thread Hrvoje Niksic
Dave Abrahams  writes:

> >>> list(chain(  *(((x,n) for n in range(3)) for x in 'abc')  ))
> [('c', 0), ('c', 1), ('c', 2), ('c', 0), ('c', 1), ('c', 2), ('c', 0), ('c', 
> 1), ('c', 2)]
>
> Huh?  Can anyone explain why the last result is different?

list(chain(*EXPR)) is constructing a tuple out of EXPR.  In your case,
EXPR evaluates to a generator expression that yields generator
expressions iterated over by chain and then by list.  It is equivalent
to the following generator:

def outer():
for x in 'abc':
def inner():
for n in range(3):
yield x, n
yield inner()

list(chain(*outer()))
... the same result as above ...

The problem is that all the different instances of the inner() generator
refer to the same "x" variable, whose value has been changed to 'c' by
the time any of them is called.  The same gotcha is often seen in code
that creates closures in a loop, such as:

>>> fns = [(lambda: x+1) for x in range(3)]
>>> map(apply, fns)
[3, 3, 3]   # most people would expect [1, 2, 3]

In your case the closure is less explicit because it's being created by
a generator expression, but the principle is exactly the same.  The
classic fix for this problem is to move the closure creation into a
function, which forces a new cell to be allocated:

def adder(x):
return lambda: x+1

>>> fns = [adder(x) for x in range(3)]
>>> map(apply, fns)
[1, 2, 3]

This is why your enum3 variant works.
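For completeness, another common workaround is to bind the current value as a
default argument, which is evaluated once at the time each lambda is created:

```python
# Binding x as a default argument captures its value at definition time,
# so each lambda keeps its own copy instead of sharing one closure cell.
fns = [lambda x=x: x + 1 for x in range(3)]
print([f() for f in fns])  # [1, 2, 3]
```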
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: organizing many python scripts, in a large corporate environment.

2011-03-13 Thread bukzor
On Mar 12, 12:01 pm, "eryksun ()"  wrote:
> bukzor wrote:
> > This only works if you can edit the PYTHONPATH. With thousands of
> > users and dozens of groups each with their own custom environments,
> > this is a herculean effort.
>
> ... I don't think it's recommended to directly run a script that's part of a 
> package namespace. Experts on the subject can feel free to chime in.
>

I think this touches on my core problem. It's dead simple (and
natural) to use .py files simultaneously as both scripts and
libraries, as long as they're in a flat organization (all piled into a
single directory). Because of this, I never expected it to be so
difficult to do the same in a tiered organization. In fact, the various
systems, syntaxes, and utilities for import seem to conspire to
disallow it. Is there a good reason for this?

Let's walk through it, to make it more concrete:
  1) we have a bunch of scripts in a directory
  2) we organize these scripts into a hierarchy of directories. This
works except for where scripts use code that exists in a different
directory.
  3) we move the re-used code causing issues in #2 to a central 'lib'
directory. For this centralized area to be found by our scripts, we
need to do one of the following:
 a) install the lib to site-packages. This is unfriendly for
development, and impossible in a corporate environment where the IT-
blessed python installation has a read-only site-packages.
 b) put the lib's directory on the PYTHONPATH. This is somewhat
unfriendly for development, as the environment will either be
incorrect or unset sometimes. This goes doubly so for users.
 c) change the cwd to the lib's directory before running the tool.
This is heinous in terms of usability. Have you ever seen a tool that
requires you to 'cd /usr/bin' before running it?
 d) (eryksun's suggestion) create symlinks to a "loader" that
exists in the same directory as the lib. This effectively puts us back
to #1 (flat organization), with the added disadvantage of obfuscating
where the code actually exists.
 e) create custom boilerplate in each script that addresses the
issues in a-d. This seems to be the best practice at the moment...

Please correct me if I'm wrong. I'd like to be.

--Buck

-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Python Tools for Visual Studio from Microsoft - Free & Open Source

2011-03-13 Thread Brad Davies



> From: [email protected]
> To: [email protected]
> Subject: Fw: Python Tools for Visual Studio from Microsoft - Free & Open 
> Source
> Date: Thu, 10 Mar 2011 18:47:19 -0800
> 
> 
> - Original Message - 
> From: 
> To: "roland garros" ; 
> Sent: Thursday, March 10, 2011 2:03 AM
> Subject: Re: Python Tools for Visual Studio from Microsoft - Free & Open 
> Source
> 
> 
> > Roland,
> >
> >> http://pytools.codeplex.com
> >
> > Looks very impressive! Thank you for sharing this work.
> >
> > For others following this thread
> >
> > - this add-in to Visual Studio works with CPython 2.5 - 3.2 and is not
> > dependent on .NET or IronPython
> >
> > - this project also brings HPC (high performance computing) and MPI
> > support to CPython using the latest Microsoft API's for large scale data
> > and computing
> >
> > Regards,
> > Malcolm
> > -- 
> > http://mail.python.org/mailman/listinfo/python-list
> >
> > 
> 

Hi Roland, Malcolm and everyone,

As it turns out, I have Microsoft Visual Studio Express 2010 and when I tried 
to run the installer, it stopped and gave
me a message that "Visual Studio 2010 must be installed.  The free integrated 
shell can be downloaded."  I saw on the 
web page that a 'VS Shell' could be downloaded.  I am assuming that PythonTools 
was tested against Microsoft Visual Studio Professional 2010, not Express, and 
that this is the problem.  I don't want two Visual Studios on my system; is 
that what would happen if I downloaded VS Shell?  And where would VS Shell go?  
Would it make itself the default for some things?

(I am emailing from a diff acct than usual)

Regards,

Patty


  -- 
http://mail.python.org/mailman/listinfo/python-list


Re: organizing many python scripts, in a large corporate environment.

2011-03-13 Thread bukzor
On Mar 12, 12:37 pm, Tim Johnson  wrote:
> * Phat Fly Alanna  [110312 07:22]:
>
>
>
>
>
>
>
> > We've been doing a fair amount of Python scripting, and now we have a
> > directory with almost a hundred loosely related scripts. It's
> > obviously time to organize this, but there's a problem. These scripts
> > import freely from each other and although code reuse is  generally a
> > good thing it makes it quite complicated to organize them into
> > directories.
>
> > There's a few things that you should know about our corporate
> > environment:
>
> > 1) I don't have access to the users' environment. Editing the
> > PYTHONPATH is out, unless it happens in the script itself.
> > 2) Users don't install things. Systems are expected to be *already*
> > installed and working, so setup.py is not a solution.
>
> > I'm quite willing to edit my import statements and do some minor
> > refactoring, but the solutions I see currently require me to divide
> > all  the code strictly between "user runnable scripts" and
> > "libraries", which isn't feasible, considering the amount of code.
>
> > Has anyone out there solved a similar problem? Are you happy with it?
>
>   Slightly similar - time doesn't permit details, but I used among
>   other things 4 methods that worked well for me:
>   1)'Site modules' with controlling constants,including paths
>   2)Wrappers for the __import__ function that enabled me to
>   fine-tune where I was importing from.
>   3)Cut down on the number of executables by using 'loaders'.
>   4)I modified legacy code to take lessons from the MVC architecture,
>   and in fact my architecture following these changes could be
>   called 'LMVCC' for
>   loader
>   model
>   view
>   controller
>   config
>
>   I hope I've made some sense with these brief sentences.
> --
> Tim
> tim at johnsons-web.com or akwebsoft.comhttp://www.akwebsoft.com


Thanks Tim.

I believe I understand it. You create loaders in a flat organization,
in the same directory as your shared library, so that it's found
naturally. These loaders use custom code to find and run the "real"
scripts. This seems to be a combination of solutions d) and e) in my
above post.

This is a solution I hadn't considered.

It seems to have few disadvantages, although it does obfuscate where
to find the "real" code somewhat. It also has the implicit requirement
that all of your scripts can be categorized into a few top-level
categories. I'll have to think about whether this applies in my
situation...

Thanks again,
--Buck
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Changing class name causes process to 'hang'

2011-03-13 Thread Tim Johnson
* Terry Reedy  [110313 13:46]:
> On 3/13/2011 3:17 PM, Tim Johnson wrote:
> >* Tim Johnson  [110313 08:27]:
> 
> Your fundamental problem is that you changed the api of your module.
> When you do that, 
  No. I created a 'fork' of the original so that the 'fork' uses
  a different interface. The original API remains in legacy code
  unchanged...
> you have to review all code that uses that api --
> or that depends on it. 
  See above.

> The test should be (and should have been)
>elif isinstance(args, cgi.cgitools):
 Oh of course, forgot about isinstance. That is a good tip!
 
> This would have and will fail with a loud AttributeError when
> cgitools is renamed.
  Understood. 
  And AttributeError would be my friend and mentor nevertheless.

> Consider upgrading to 2.7 if you have not and using the logging module.
 :) I like my logging module, I believe it may have 'anticipated'
 the 2.7 module. And I can't count on my client's servers to host
 2.7 for a while.
> 
> Double double underscore names are 'special' rather than necessarily
> 'private'. These two are somewhat documented (3.2 Reference, 3.1,
> "__class__ is the instance’s class", "__name__ is the class name").
> So they will not change or disappear without notice. I expect them
> to be pretty stable.
 Good to hear. 
 Thanks Terry.
-- 
Tim 
tim at johnsons-web.com or akwebsoft.com
http://www.akwebsoft.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: organizing many python scripts, in a large corporate environment.

2011-03-13 Thread Tim Johnson
* bukzor  [110313 15:48]:
> 
> Thanks Tim.
> 
> I believe I understand it. You create loaders in a flat organization,
> in the same directory as your shared library, so that it's found
  Not in the same directory as shared libraries. 

> naturally. These loaders use custom code to find and run the "real"
> scripts. This seems to be a combination of solutions d) and e) in my
> above post.
 In my case, the loader 
 1)Executes code that would otherwise be duplicated. 
 2)Loads modules (usually from a lower-level directory) based on
   keywords passed as a URL segment. 
> 
> It seems to have few disadvantages, although it does obfuscate where
> to find the "real" code somewhat. It also has the implicit requirement
> that all of your scripts can be categorized into a few top-level
> categories. 
 Correct. In my case that is desirable.
> I'll have to think about whether this applies in my
> situation...
  cheers

-- 
Tim 
tim at johnsons-web.com or akwebsoft.com
http://www.akwebsoft.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Grabbing user's agent and OS type

2011-03-13 Thread Ben Finney
Steven D'Aprano  writes:

> I think you give the user-agent string too much credit. Despite what
> some people think, including some browser developers, it's a free-form
> string and can contain anything the browser wants. There's no
> guarantee that fields will appear in a particular order, or even
> appear at all.

Much more importantly: due to a long history of abuse by web application
developers, the User-Agent field is often *deliberately faked* in the
web client to report false information in order to get past design flaws
in web applications.

Some of us do this in a systematic way to send a meaningful message to
the people misusing this field.

<http://linuxmafia.com/faq/Web/user-agent-string.html>

> If you're doing feature detection by parsing the UA string, you're in
> a state of sin.

Word.

There are some plausible uses of the User-Agent field that are not
necessarily abusive, but you can thank the past and present hordes of
abusive applications for poisoning the well and making it a pretty
useless field now.

-- 
 \ “What you have become is the price you paid to get what you |
  `\ used to want.” —Mignon McLaughlin |
_o__)  |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP for module naming conventions

2011-03-13 Thread Ben Finney
Tim Johnson  writes:

> I need to be better informed on naming conventions for modules. For
> instance, I need to create a new module and I want to make sure that
> the module name will not conflict with any future or current python
> system module names.

You'll never be able to make sure of that, and you would be needlessly
eliminating a whole lot of potentially useful names for your modules.

Have you read and understood PEP 328, which introduces the distinction
between relative and absolute imports? It's designed to avoid the
problem you're describing <http://www.python.org/dev/peps/pep-0328/>.

> There may be a PEP for this, if so, a URL to such a PEP would suffice
> for my inquiry. Also, if there is an index of PEPs, a link to such
> would also be appreciated.

PEP 0 is the index of all PEPs
<http://www.python.org/dev/peps/pep-/>.

-- 
 \ “Pinky, are you pondering what I'm pondering?” “I think so, |
  `\   Brain, but if we have nothing to fear but fear itself, why does |
_o__) Elanore Roosevelt wear that spooky mask?” —_Pinky and The Brain_ |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Do you monitor your Python packages in inux distributions?

2011-03-13 Thread Ben Finney
[email protected] writes:

> […] I got a hit on an Ubuntu bug tracker about a SpamBayes bug. As it
> turns out, Ubuntu distributes an outdated (read: no longer maintained)
> version of SpamBayes. The bug had been fixed over three years ago in
> the current version. Had I known this I could probably have saved them
> some trouble, at least by suggesting that they upgrade.

If the maintainer of Ubuntu's spambayes knew it was a bug in the
upstream package, but failed to contact upstream (the SpamBayes team),
the maintainer of Ubuntu's spambayes isn't doing their job properly IMO.

> I have a question for you people who develop and maintain Python-based
> packages. How closely, if at all, do you monitor the bug trackers of
> Linux distributions (or Linux-like packaging systems like MacPorts)
> for activity related to your packages?

Not at all. If someone uses code, finds a bug in that code, thinks the
bug should be addressed in that code upstream, it's their responsibility
to report that using the contact details and/or bug tracker provided.

For OS distributions, that means the package maintainers are responsible
for reporting the bug upstream if it's suspected or determined to be a
bug in the upstream code base.

> How do you encourage such projects to push bug reports and/or fixes
> upstream to you?

Make the bug tracker and/or contact email address available at the same
location where the code itself is obtained. Be responsive to whoever
reports bugs using those channels. Track the bug reports effectively and
reliably.

-- 
 \ “I'm beginning to think that life is just one long Yoko Ono |
  `\   album; no rhyme or reason, just a lot of incoherent shrieks and |
_o__)  then it's over.” —Ian Wolff |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP for module naming conventions

2011-03-13 Thread Tim Johnson
* Ben Finney  [110313 17:15]:
> Tim Johnson  writes:
> 
> > I need to be better informed on naming conventions for modules. For
> > instance, I need to create a new module and I want to make sure that
> > the module name will not conflict with any future or current python
> > system module names.
> 
> You'll never be able to make sure of that, and you would be needlessly
> eliminating a whole lot of potentially useful names for your modules.
> 
> Have you read and understood PEP 328, which introduces the distinction
> between relative and absolute imports? It's designed to avoid the
> problem you're describing <http://www.python.org/dev/peps/pep-0328/>.
 Have read, don't fully understand, but it sounds like the dust
 hasn't settled yet. It will sink in.
 thanks
 tim

> > There may be a PEP for this, if so, a URL to such a PEP would suffice
> > for my inquiry. Also, if there is an index of PEPs, a link to such
> > would also be appreciated.
> 
> PEP 0 is the index of all PEPs
> <http://www.python.org/dev/peps/pep-/>.
> 
> -- 
>  \ “Pinky, are you pondering what I'm pondering?” “I think so, |
>   `\   Brain, but if we have nothing to fear but fear itself, why does |
> _o__) Elanore Roosevelt wear that spooky mask?” —_Pinky and The Brain_ |
> Ben Finney
> -- 
> http://mail.python.org/mailman/listinfo/python-list

-- 
Tim 
tim at johnsons-web.com or akwebsoft.com
http://www.akwebsoft.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Switching between Python releases under Windows

2011-03-13 Thread Tim Lesher
I've written a script to do just this, called switchpy.bat.

It's described here:

http://apipes.blogspot.com/2010/10/switchpy.html

Or you can just grab the latest version at:

https://bitbucket.org/tlesher/mpath/src/3edcff0e8197/switchpy.bat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: organizing many python scripts, in a large corporate environment.

2011-03-13 Thread Terry Reedy

On 3/13/2011 7:27 PM, bukzor wrote:


I think this touches on my core problem. It's dead simple (and
natural) to use .py files simultaneously as both scripts and
libraries, as long as they're in a flat organization (all piled into a
single directory). Because of this, I never expected it to be so
difficult to do the same in a tiered organization. In fact the various
systems, syntaxes, and utilities for import seem to be conspiring to
disallow it. Is there a good reason for this?

Let's walk through it, to make it more concrete:
   1) we have a bunch of scripts in a directory
   2) we organize these scripts into a hierarchy of directories. This
works except for where scripts use code that exists in a different
directory.
   3) we move the re-used code causing issues in #2 to a central 'lib'
directory. For this centralized area to be found by our scripts, we
need to do one of the following
  a) install the lib to site-packages. This is unfriendly for
development,


I find it very friendly for development. I am testing in the same 
environment as users will have. I do intra-package imports with absolute 
imports. I normally run from IDLE edit windows, so I just tried running 
'python -m pack.sub.mod' from .../Python32 (WinXp, no PATH addition for 
Python) and it seems to work fine.


> impossible in a corporate environment where the IT-
> blessed python installation has a read-only site-packages.

My package is intended for free individuals, not straight-jacketed 
machines in asylums ;-).


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


ttk styles

2011-03-13 Thread Peter
Hi, I'm struggling to get a good understanding of styles as used in
ttk. I have read the tutorial section on using styles but haven't been
able to solve this problem.

I am attempting to create a Checkbutton with the indicatoron=false
option. For ttk, the documentation is clear that you have to create a
custom style to achieve this behaviour. But the only "example" my
Internet searches have turned up is written in Tcl, i.e. here
is what I have found (quoted directly):

Here’s how you set it up: To achieve the effect of -indicatoron false,
create a new layout that doesn’t have an indicator:

style layout Toolbar.TCheckbutton {
  Toolbutton.border -children {
   Toolbutton.padding -children {
 Toolbutton.label
   }
  }
}

Then use style map and style default to control the border appearance:

style default Toolbar.TCheckbutton \
 -relief flat
style map Toolbar.TCheckbutton -relief {
  disabled flat
  selected sunken
  pressed sunken
  active raised
}

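My best guess at translating that Tcl into Python follows; it is an untested
sketch, written with Python 3's tkinter module (on 2.7 the imports would be
Tkinter and ttk), and the Toolbutton element names are simply taken from the
Tcl above, so they may need adjusting for your theme:

```python
import os

# Layout without an indicator element, mirroring the Tcl snippet above.
TOOLBAR_LAYOUT = [
    ("Toolbutton.border", {"sticky": "nswe", "children": [
        ("Toolbutton.padding", {"sticky": "nswe", "children": [
            ("Toolbutton.label", {"sticky": "nswe"}),
        ]}),
    ]}),
]

# State-to-relief mapping, equivalent to the Tcl 'style map'.
TOOLBAR_RELIEF_MAP = [
    ("disabled", "flat"),
    ("selected", "sunken"),
    ("pressed", "sunken"),
    ("active", "raised"),
]

def make_toolbar_style(style):
    """Register an indicator-less Toolbar.TCheckbutton style.

    'style' is expected to be a ttk.Style instance.
    """
    style.layout("Toolbar.TCheckbutton", TOOLBAR_LAYOUT)
    style.configure("Toolbar.TCheckbutton", relief="flat")
    style.map("Toolbar.TCheckbutton", relief=TOOLBAR_RELIEF_MAP)

if __name__ == "__main__" and os.environ.get("DISPLAY"):
    # Tiny demo; only attempted when a display is available.
    import tkinter as tk
    from tkinter import ttk

    root = tk.Tk()
    make_toolbar_style(ttk.Style(root))
    var = tk.IntVar()
    ttk.Checkbutton(root, text="Bold", variable=var,
                    style="Toolbar.TCheckbutton").pack(padx=10, pady=10)
    root.mainloop()
```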
Hopefully somebody else in this group has "done" this and can post
their "solution"?

Thanks
Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-13 Thread Dave Abrahams
Steven D'Aprano  pearwood.info> writes:

> If anyone has any use-cases for sorting with a comparison function that 
> either can't be written using a key function, or that perform really 
> badly when done so, this would be a good time to speak up.

I think it's probably provable that there are no cases in the first category,
provided you're willing to do something sufficiently contorted.  However,
it also seems self-evident to me that many programmers will rightly chafe
at the idea of creating and tearing down a bunch of objects just to
compare things for sorting.  Think of the heap churn!  Even if it turns out
that Python 3 contains some magic implementation detail that makes it
efficient most of the time, it goes against a natural understanding of the 
computation model.
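For reference, 2.7 and 3.2 do ship an adapter, functools.cmp_to_key, which
turns a comparison function into a key function by wrapping every element in
exactly the kind of throwaway comparison object described above:

```python
from functools import cmp_to_key

def compare(a, b):
    # old-style three-way comparison: negative, zero, or positive
    return (a > b) - (a < b)

data = [3, 1, 2]
# cmp_to_key wraps each element in a small object whose rich-comparison
# methods call compare() -- convenient, but it is per-item allocation.
print(sorted(data, key=cmp_to_key(compare)))  # [1, 2, 3]
```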

2p for y'all.
-Dave

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Don't Want Visitor To See Nuttin'

2011-03-13 Thread alex23
Ian Kelly  wrote:
> Yow.  You're designing a Maya 2012 website to help some travel company
> bilk gullible people out of thousands of dollars?  I would be ashamed
> to have anything to do with this.

To be fair, he _does_ appear to be bilking the company out of
thousands of dollars by pretending to be a professional web developer,
so maybe there's a karmic balance angle here I hadn't considered :)

So hard to stay mad at anyone helping to immanentize the eschaton!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: organizing many python scripts, in a large corporate environment.

2011-03-13 Thread eryksun ()
On Sunday, March 13, 2011 7:27:47 PM UTC-4, bukzor wrote:

>  e) create custom boilerplate in each script that addresses the
> issues in a-d. This seems to be the best practice at the moment...

The boilerplate should be pretty simple. For example, if the base path is the 
parent directory, then the following works for me:

import os.path
import sys
base = os.path.dirname(os.path.dirname(__file__))
sys.path.insert(0, base)

Of course, you need to have the __init__.py in each subdirectory.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-13 Thread Santoso Wijaya
I did not even realize such a change occurred in Python 3. I'm still
currently blissful in Python 2 land. I'd be concerned about the impact in
ported libraries (memory footprint? others?)...

~/santa


On Sun, Mar 13, 2011 at 1:35 PM, Dave Abrahams  wrote:

> Steven D'Aprano  pearwood.info> writes:
>
> > If anyone has any use-cases for sorting with a comparison function that
> > either can't be written using a key function, or that perform really
> > badly when done so, this would be a good time to speak up.
>
> I think it's probably provable that there are no cases in the first
> category,
> provided you're willing to do something sufficiently contorted.  However,
> it also seems self-evident to me that many programmers will rightly chafe
> at the idea of creating and tearing down a bunch of objects just to
> compare things for sorting.  Think of the heap churn!  Even if it turns out
> that Python 3 contains some magic implementation detail that makes it
> efficient most of the time, it goes against a natural understanding of the
> computation model.
>
> 2p for y'all.
> -Dave
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: organizing many python scripts, in a large corporate environment.

2011-03-13 Thread Frank Millman


"bukzor"  wrote


Let's walk through it, to make it more concrete:
  1) we have a bunch of scripts in a directory
  2) we organize these scripts into a hierarchy of directories. This
works except for where scripts use code that exists in a different
directory.
  3) we move the re-used code causing issues in #2 to a central 'lib'
directory. For this centralized area to be found by our scripts, we
need to do one of the following
 a) install the lib to site-packages. This is unfriendly for
development, and impossible in a corporate environment where the IT-
blessed python installation has a read-only site-packages.
 b) put the lib's directory on the PYTHONPATH. This is somewhat
unfriendly for development, as the environment will either be
incorrect or unset sometimes. This goes double so for users.
 c) change the cwd to the lib's directory before running the tool.
This is heinous in terms of usability. Have you ever seen a tool that
requires you to 'cd /usr/bin' before running it?
 d) (eryksun's suggestion) create symlinks to a "loader" that
exist in the same directory as the lib. This effectively puts us back
to #1 (flat organization), with the added disadvantage of obfuscating
where the code actually exists.
 e) create custom boilerplate in each script that addresses the
issues in a-d. This seems to be the best practice at the moment...


Disclaimers -

1. I don't know if this will solve your problem.
2. Even if it does, I don't know if this is good practice - I suspect not.

I put the following lines at the top of __init__.py in my package 
directory -

   import os
   import sys
   sys.path.insert(0, os.path.dirname(__file__))

This causes the package directory to be placed in the search path.

In your scripts you have to 'import' the package first, to ensure that these 
lines get executed.


My 2c

Frank Millman


--
http://mail.python.org/mailman/listinfo/python-list


Re: organizing many python scripts, in a large corporate environment.

2011-03-13 Thread bukzor
On Mar 13, 10:52 pm, "eryksun ()"  wrote:
> On Sunday, March 13, 2011 7:27:47 PM UTC-4, bukzor wrote:
> >      e) create custom boilerplate in each script that addresses the
> > issues in a-d. This seems to be the best practice at the moment...
>
> The boilerplate should be pretty simple. For example, if the base path is the 
> parent directory, then the following works for me:
>
> import os.path
> import sys
> base = os.path.dirname(os.path.dirname(__file__))
> sys.path.insert(0, base)
>
> Of course, you need to have the __init__.py in each subdirectory.

I've written this many times. It has issues. In fact, I've created a
library for this purpose, for the following reasons.

What if your script is compiled? You add an rstrip('c') I guess.
What if your script is optimized (python -O)? You add an rstrip('o'),
probably.
What if someone likes your script enough to symlink it elsewhere? You
add a realpath().
In some debuggers (pudb), __file__ is relative, so you need to add
abspath as well.
Since you're putting this in each of your scripts, it's wise to check
if the directory is already in sys.path before inserting.
We're polluting the global namespace a good bit now, so it would be
good to wrap this in a function.
To make our function more general (especially since we're going to be
copying it everywhere), we'd like to use relative paths.

In total, it looks like below after several years of trial, error, and
debugging. That's a fair amount of code to put in each script, and is
a maintenance headache whenever something needs to be changed. Now the
usual solution to that type of problem is to put it in a library, but
the purpose of this code is to give access to the libraries... What I
do right now is to symlink this library to all script directories to
allow them to bootstrap and gain access to libraries not in the local
directory.

I'd really love to delete this code forever. That's mainly why I'm
here.


#!/not/executable/python
"""
This module helps your code find your libraries more simply, easily,
reliably.

No dependencies apart from the standard library.
Tested in python version 2.3 through 2.6.
"""

DEBUG = False

#this is just enough code to give the module access to the libraries.
#any other shared code should go in a library

def normfile(fname):
    "norm the filename to account for compiled and symlinked scripts"
    from os.path import abspath, islink, realpath
    if fname.endswith(".pyc") or fname.endswith(".pyo"):
        fname = fname[:-1]
    if fname.startswith('./'):
        # hack to get pudb to work in a script that changes directory
        from os import environ
        fname = environ['PWD'] + '/' + fname
    if islink(fname):
        fname = realpath(fname)
    return abspath(fname)

def importer(depth=1):
    "get the importing script's __file__ variable"
    from inspect import getframeinfo
    frame = prev_frame(depth + 1)
    if frame is None:
        return '(none)'
    else:
        return normfile(getframeinfo(frame)[0])

def prev_frame(depth=1):
    "get the calling stack frame"
    from inspect import currentframe
    frame = currentframe()
    depth += 1
    try:
        while depth:
            frame = frame.f_back
            depth -= 1
    except (KeyError, AttributeError):
        frame = None
    return frame


def here(depth=1):
    "give the path to the current script's directory"
    from os.path import dirname
    return dirname(importer(depth))

def use(mydir, pwd=None):
    """
    add a directory to the Python module search path
    relative paths are relative to the currently executing script,
    unless specified otherwise
    """
    from os.path import join, normpath
    from sys import path

    if not pwd:
        pwd = here(2)
    if not mydir.startswith("/"):
        mydir = join(pwd, mydir)
    mydir = normpath(mydir)
    path.insert(1, mydir)
    return mydir

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: organizing many python scripts, in a large corporate environment.

2011-03-13 Thread bukzor
On Mar 13, 6:50 pm, Terry Reedy  wrote:
> On 3/13/2011 7:27 PM, bukzor wrote:
>
>
>
>
>
>
>
>
>
> > I think this touches on my core problem. It's dead simple (and
> > natural) to use .py files simultaneously as both scripts and
> > libraries, as long as they're in a flat organization (all piled into a
> > single directory). Because of this, I never expected it to be so
> > difficult to do the same in a tiered organization. In fact the various
> > systems, syntaxes, and utilities for import seem to be conspiring to
> > disallow it. Is there a good reason for this?
>
> > Let's walk through it, to make it more concrete:
> >    1) we have a bunch of scripts in a directory
> >    2) we organize these scripts into a hierarchy of directories. This
> > works except for where scripts use code that exists in a different
> > directory.
> >    3) we move the re-used code causing issues in #2 to a central 'lib'
> > directory. For this centralized area to be found by our scripts, we
> > need to do one of the following
> >       a) install the lib to site-packages. This is unfriendly for
> > development,
>
> I find it very friendly for development. I am testing in the same
> environment as users will have. I do intra-package imports with absolute
> imports. I normally run from IDLE edit windows, so I just tied running
> 'python -m pack.sub.mod' from .../Python32 (WinXp, no PATH addition for
> Python) and it seems to work fine.
>
>  > impossible in a corporate environment where the IT-
>  > blessed python installation has a read-only site-packages.
>
> My package is intended for free individuals, not straight-jacketed
> machines in asylums ;-).
>
> --
> Terry Jan Reedy

virtualenv would let me install into site-packages without needing to
muck with IT. Perhaps I should look closer at that...

--Buck
-- 
http://mail.python.org/mailman/listinfo/python-list