Matthew Joiner wants to stay in touch on LinkedIn

2009-11-12 Thread Matthew Joiner
LinkedIn


Matthew Joiner requested to add you as a connection on LinkedIn:
--

Jaime,

I'd like to add you to my professional network on LinkedIn.

- Matthew Joiner

Accept invitation from Matthew Joiner
http://www.linkedin.com/e/I2LlXdLlWUhFABKmxVOlgGLlWUhFAfhMPPF/blk/I420651809_3/6lColZJrmZznQNdhjRQnOpBtn9QfmhBt71BoSd1p65Lr6lOfPdvej0UcjkSc38QiiZ9jj19qBdVkOYVdPgUd3wSdjwLrCBxbOYWrSlI/EML_comm_afe/

View invitation from Matthew Joiner
http://www.linkedin.com/e/I2LlXdLlWUhFABKmxVOlgGLlWUhFAfhMPPF/blk/I420651809_3/0PnPAMe34Rdz0Od4ALqnpPbOYWrSlI/svi/

--

Why might connecting with Matthew Joiner be a good idea?

Matthew Joiner's connections could be useful to you:
After accepting Matthew Joiner's invitation, check Matthew Joiner's connections 
to see who else you may know and who you might want an introduction to. 
Building these connections can create opportunities in the future.

 
--
(c) 2009, LinkedIn Corporation

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python & Go

2009-11-12 Thread Michele Simionato
Well, Go looks like Python in the philosophy (it is a minimalist, keep
it simple language) more than in the syntax.
The one thing that I really like is the absence of classes and the
presence of interfaces
(I have been advocating something like that for years). I am dubious
about the absence of exceptions, though.
If I may attempt a prediction, I would say that if the language is
adopted internally at Google it will make a difference; otherwise,
it will be rapidly forgotten.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to specify Python version in script?

2009-11-12 Thread Yinon Ehrlich
> Is there some way to specify at the very beginning of the script
> the acceptable range of Python versions?

sys.hexversion,
see http://mail.python.org/pipermail/python-list/2009-June/185939.html
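A minimal sketch of that approach (the 2.5 threshold is just an example
value); sys.version_info offers the same check more readably:

```python
import sys

# sys.hexversion packs the version as 0xMMmmppRS
# (major, minor, micro, release level, serial).
MIN_VERSION = 0x020500f0  # 2.5.0 final, an illustrative minimum

if sys.hexversion < MIN_VERSION:
    raise SystemExit("This script requires Python 2.5 or later")

# Equivalent, and easier to read:
version_ok = sys.version_info[:2] >= (2, 5)
```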

-- Yinon
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: advice needed for lazy evaluation mechanism

2009-11-12 Thread Steven D'Aprano
On Thu, 12 Nov 2009 08:53:57 +0100, Dieter Maurer wrote:

> Steven D'Aprano  writes on 10 Nov
> 2009 19:11:07 GMT:
>> ...
>> > So I am trying to restructure it using lazy evaluation.
>> 
>> Oh great, avoiding confusion with something even more confusing.
> 
> Lazy evaluation may be confusing if it is misused. But, it may be very
> clear and powerful if used appropriately.

I was referring to the specific example given, not the general concept of 
lazy evaluation.

I went on to give another example of simple, straightforward lazy 
evaluation: using properties for computed attributes.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Sriram Srinivasan
I wonder why every programming language has some kind of a 'standard
library' built in within it.
In my view it should not be called a 'library' at all. What it does
is more like a bunch of built-in programs, ready-made to do stuff.

Let's see what a 'library' does:

1. offers books for customers
 1.1 lets user select a book by genre, etc
 1.2 lets user to use different books of same genre, etc
 1.3 lets user to use books by same author, etc for different genre

2. keeps track of all the books + their genre
 2.1 first knows what all books it has at present
 2.2 when new book comes it is added to the particular shelf sorted by
genre,author,edition, etc.
 2.3 when books become old they are kept separately for future
reference
 2.4 very old books can be sent to a museum/discarded

I guess no standard library does even the minimum of this, yet wants to be
called a library.

As a Python user I always wanted the standard library to have such
features, so the user/developer decides which set of libraries he
wants to use.

Consider that the libraries for 2.5, 2.6 and 3.x are all available to the
user, and the user selects what he wants with something like:

use library 2.5 or use library 2.6 etc.

The 2 main things that the library management interface has to do is
intra library management and inter library management.

intra library mgmt- consider books to be different libraries
(standard, commercial, private, hobby, etc)
inter library mgmt- consider books to be modules inside a library
( standard, commercial, private, hobby, etc)

If somehow we could accomplish this kind of mother-of-all plugin/ad-hoc
system, that would be a real breakthrough.

Advantages:

1. new modules can be added to the stream quickly
2. let the user select what he want to do
3. modules (that interdepend on each other) can be packed into small
distribution and added to the stream quickly without waiting for new
releases
4. solution to problems like py 2.x and 3.x
5. users can be up to date
6. documentation becomes easy + elaborate to users
7. bug managing is easy too
8. more feed back
9. testing also becomes easy
10. many more , i don't know.. you have to find.

Python already has something like that: the __future__ stuff. But my
question is: how many people know about that? And how many use it? Most of
them wait until the old crust gets totally removed. That is bad for the user
and for Python; that is why problems like py2.x vs py3.x originate. If there
is a newer book collection it must always be available at the library;
I must not have to go to another library to get that book.

These are just my views on the state of the standard libraries and how to
make them state-of-the-art..! ;)





-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python & Go

2009-11-12 Thread Carl Banks
On Nov 11, 8:42 pm, Carl Banks  wrote:
> On Nov 11, 7:56 pm, geremy condra  wrote:
>
> > On Wed, Nov 11, 2009 at 9:00 PM, Mensanator  wrote:
> > > On Nov 11, 6:53 pm, kj  wrote:
> > >> I'm just learning about Google's latest: the GO (Go?) language.
> > >> (e.g. http://golang.org or http://www.youtube.com/watch?v=rKnDgT73v8s).
> > >> There are some distinctly Pythonoid features to the syntax, such
> > >> as "import this_or_that",
>
> > > There's more to Python than import statements.
> > > In fact, this Go language is nothing like Python.
>
> > Actually, numerous analogies have been drawn between the two
> > both by whoever wrote the docs and the tech media, including
> > slashdot and techcrunch.
>
> Superficially it looks quite hideous, at least this sample does, but
> underneath the covers might be another question.  Javascript looks
> like Java but behaves more like Python.  Such might also be the case
> for Go.  I'll reserve judgment till I've looked at it, but it's
> advertised as natively supporting something I've always wanted in a
> static language: signatures (and, presumably, a culture to support
> them).


Ok, I've read up on the language and I've seen enough.
I, for one, won't be using it.

I don't think it has any possibility of gaining traction without
serious changes.  If Google decides to throw money at it and/or push
it internally (and I am skeptical Google's software engineers would
let themselves be browbeaten into using it) it'll be Lisp 2: Electric
Boogaloo.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Error in Windmill

2009-11-12 Thread Raji Seetharaman
Hi

I'm learning web scraping with Python from here:
http://www.packtpub.com/article/web-scraping-with-python-part-2

From the above link, the complete code is here: http://pastebin.com/m10046dc6

When I run the program in the terminal I receive the following errors:

File "nasa.py", line 41, in 
test_scrape_iotd_gallery()
  File "nasa.py", line 24, in test_scrape_iotd_gallery
client = WindmillTestClient(__name__)
  File
"/usr/local/lib/python2.6/dist-packages/windmill-1.3-py2.6.egg/windmill/authoring/__init__.py",
line 142, in __init__
method_proxy = windmill.tools.make_jsonrpc_client()
  File
"/usr/local/lib/python2.6/dist-packages/windmill-1.3-py2.6.egg/windmill/tools/__init__.py",
line 35, in make_jsonrpc_client
url = urlparse(windmill.settings['TEST_URL'])
AttributeError: 'module' object has no attribute 'settings'

Any suggestions?

Thanks
Raji. S
-- 
http://mail.python.org/mailman/listinfo/python-list


Python C API and references

2009-11-12 Thread lallous

Hello,

Every time I use PyObject_SetAttrString(obj, attr_name, py_val) and I don't 
need the reference to py_val, should I decrement the reference after this 
call?


So for example:

PyObject *py_val = PyInt_FromLong(5);
PyObject_SetAttrString(py_obj, "val", py_val);
Py_DECREF(py_val);

Right?

If so, take sysmodule.c:

if (PyObject_SetAttrString(builtins, "_", Py_None) != 0)
return NULL;

Shouldn't they also call Py_DECREF(Py_None) ?

The same logic applies to PyDict_SetItemString(): the reference should be 
decremented after setting the item (of course, only if the value is no 
longer needed).


--
Elias 


--
http://mail.python.org/mailman/listinfo/python-list


C api and checking for integers

2009-11-12 Thread lallous

Hello,

I am a little confused about how to check whether a Python variable is an 
integer or not.


Sometimes PyInt_Check() fails and PyLong_Check() succeeds.

How do I properly check for integer values?

OTOH, I tried PyNumber_Check() and:

(1) The doc says: Returns 1 if the object o provides numeric protocols, and 
false otherwise. This function always succeeds.


What do they mean: "always succeeds" ?

(2) It seems PyNumber_Check(py_val) returns true when passed a class instance!

Please advise.
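At the Python level, the distinction these C checks draw can be sketched
with the numbers ABC (excluding bool is a design choice of this sketch; the
C API itself treats bool as an int):

```python
import numbers

def is_integer(x):
    # Accept both small and arbitrary-precision integers, but not
    # floats -- roughly what PyInt_Check + PyLong_Check cover in C.
    # Rejecting bool is this sketch's own choice.
    return isinstance(x, numbers.Integral) and not isinstance(x, bool)

small_ok = is_integer(5)
big_ok = is_integer(10 ** 100)   # too big for a C long, still an integer
float_no = is_integer(5.0)       # a number, but not integral
```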

--
Elias 


--
http://mail.python.org/mailman/listinfo/python-list


query regarding file handling.

2009-11-12 Thread ankita dutta
hi all,

I have a file containing a 3x3 matrix of decimal numbers (tab separated), like this:

0.02    0.38    0.01
0.04    0.32    0.00
0.03    0.40    0.02

Now I want to read one row and get the sum of that particular row, but when
I try the following code, I get errors:

*code*:
"
ln1=open("A.txt","r+")# file "A.txt" contains my matrix
lines1=ln1.readlines()
n_1=[ ]

for p1 in range (0,len(lines1)):
f1=lines1[p1]
n_1.append((f1) )
print n_1
print  sum(n_1[0])

"

*output*:

['0.0200\t0.3877\t0.0011\n', '0.0040\t0.3292\t0.0001\n',
'0.0355\t0.4098\t0.0028\n', '0.0035\t0.3063\t0.0001\n',
'0.0080\t0.3397\t0.0002\n']

Traceback (most recent call last):
  File "A_1.py", line 20, in 
print sum(nodes_1[0])
  File "/usr/lib/python2.5/site-packages/numpy/core/fromnumeric.py", line
993, in sum
return _wrapit(a, 'sum', axis, dtype, out)
  File "/usr/lib/python2.5/site-packages/numpy/core/fromnumeric.py", line
37, in _wrapit
result = getattr(asarray(obj),method)(*args, **kwds)
TypeError: cannot perform reduce with flexible type


*What I think:*

As the list is in the form '0.0200\t0.3877\t0.0011\n', n_1[0] is taken as a
whole string which includes "\t"; I think that is why it gives the error.

Now how can I read only the required numbers from the line
'0.0200\t0.3877\t0.0011\n' and find their sum?
Can you kindly help me code this properly?



thank you,

regards
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python & Go

2009-11-12 Thread Duncan Booth
Michele Simionato  wrote:

> I forgot to post a link to a nice analysis of Go:
> http://scienceblogs.com/goodmath/2009/11/googles_new_language_go.php
> 
Thanks for that link. I think it pretty well agrees with my first 
impressions of Go: there are some nice bits, but there are also some bits 
they really should be embarrassed that they even considered.

The lack of any kind of error handling, whether exceptions or anything else 
is, I think, a killer. When you access a value out of a map you have a 
choice of syntax: one way gives you a boolean flag you can test to see 
whether or not the item was in the map, the other either gives you the 
value or crashes the program (yes, the documentation actually says 
'crash'). Both of these are wrong: the flag is wrong because it forces you 
to handle every lookup error immediately and at the same place in the code; 
the crash is wrong for obvious reasons.
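For comparison, Python's dict gives both of those behaviours plus a third:
exceptions, which decouple the lookup from the error handling (a sketch with
made-up data):

```python
prices = {"apple": 3}

# 1. Flag-style, like Go's comma-ok form: test membership first.
found = "pear" in prices            # False, no crash

# 2. Crash-style: direct indexing raises KeyError, but unlike a crash
#    the exception can be handled anywhere up the call stack.
missing = False
try:
    prices["pear"]
except KeyError:
    missing = True

# 3. A default value, with no flag and no exception at all.
price = prices.get("pear", 0)       # 0
```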

The lack of inheritance is a bit weird: so far as I can tell you can have 
what is effectively a base class providing some concrete implementation, but 
there are no virtual methods, so there is no way to override anything.

What that article didn't mention, and what is possibly Go's real strong 
point, is that it has built-in support for parallel processing. Again, though, 
the implementation looks weak: any attempt to read from a channel is either 
non-blocking or blocks forever. I guess you can probably implement timeouts 
by using a select to read from multiple channels, but as with accessing 
values from a map it doesn't sound like it will scale well to producing 
robust applications.

It has too many special cases: a lot of the builtin types can exist only as 
builtin types: if they weren't part of the language you couldn't implement 
an equivalent. e.g. A map has a key and value. The key can be any type 
which implements equality, but you can't implement equality tests for your 
own types so you cannot define additional key types.

-- 
Duncan Booth http://kupuguy.blogspot.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: query regarding file handling.

2009-11-12 Thread Chris Rebert
On Thu, Nov 12, 2009 at 1:59 AM, ankita dutta  wrote:
> hi all,
>
> i have a file of 3x3 matrix of decimal numbers(tab separated). like this :
>
> 0.02    0.38    0.01
> 0.04    0.32    0.00
> 0.03    0.40    0.02
>
> now i want to read 1 row and get the sum of a particular row. but when i am
> trying with the following code, i am getting errors :

Try using the `csv` module, which despite its name, works on the
tab-delimited variant of the format as well:
http://docs.python.org/library/csv.html
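A sketch of that suggestion in modern Python, with io.StringIO standing in
for the real A.txt; each field arrives as a string, so it must be converted
before summing:

```python
import csv
from io import StringIO

# Stand-in for open("A.txt"): the tab-separated matrix from the post.
data = StringIO("0.02\t0.38\t0.01\n"
                "0.04\t0.32\t0.00\n"
                "0.03\t0.40\t0.02\n")

row_sums = []
for row in csv.reader(data, delimiter="\t"):
    # Convert each string field to float before summing the row.
    row_sums.append(sum(float(x) for x in row))
```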

Cheers,
Chris
--
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Choosing GUI Module for Python

2009-11-12 Thread Lorenzo Gatti
On Nov 11, 9:48 am, Lorenzo Gatti  wrote:

> On a more constructive note, I started to follow the instructions at
> http://www.pyside.org/docs/pyside/howto-build/index.html (which are
> vague and terse enough to be cross-platform) with Microsoft VC9
> Express.
> Hurdle 0: recompile Qt because the provided DLLs have hardcoded wrong
> paths that confuse CMake.
> How should Qt be configured? My first compilation attempt had to be
> aborted (and couldn't be resumed) after about 2 hours: trial and error
> at 1-2 builds per day could take weeks.

Update: I successfully compiled Qt (with WebKit disabled since it
gives link errors), as far as I can tell, and I'm now facing
apiextractor.

Hurdle 1a: convince CMake that I actually have Boost headers and
compiled libraries.
The Boost directory structure is confusing (compiled libraries in two
places), and CMake's script (FindBoost.cmake) is inconsistent (should
I set BOOST_INCLUDEDIR or BOOST_INCLUDE_DIR?), obsolete (last known
version is 1.38 rather than the requisite 1.40) and rather fishy (e.g.
hardcoded "c:\boost" paths).
Would the Cmake-based branch of Boost work better? Any trick or recipe
to try?
Hurdle 1b: the instructions don't mention a dependency from libxml2.

Lorenzo Gatti
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python with echo

2009-11-12 Thread Diez B. Roggisch

hong zhang schrieb:

> List,
>
> I have a question about python using echo.
>
> POWER = 14
> return_value = os.system('echo 14 > /sys/class/net/wlan1/device/tx_power')
>
> can assign 14 to tx_power.
>
> But
> return_value = os.system('echo $POWER > /sys/class/net/wlan1/device/tx_power')
>
> return_value is 256, not 0. It cannot assign 14 to tx_power.


Because $POWER is an environment variable, but POWER is a 
Python variable, which doesn't magically become an environment variable.


There are various ways to achieve what you want, but IMHO you should 
ditch the whole echo business, as it's simply needless in Python:


with open("/sys/class/net/wlan1/device/tx_power", "w") as f:
f.write("%i" % POWER)
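If the echo must stay, the fix is to interpolate the Python variable into
the command string yourself rather than relying on the shell to expand
$POWER (which is not exported). A runnable sketch, with a temporary file
standing in for the real /sys path (an assumption for illustration):

```python
import os
import tempfile

POWER = 14
# Hypothetical stand-in for /sys/class/net/wlan1/device/tx_power:
path = os.path.join(tempfile.mkdtemp(), "tx_power")

# Interpolate the value ourselves; the shell never sees "$POWER".
rc = os.system('echo %d > %s' % (POWER, path))

with open(path) as f:
    value = f.read().strip()
```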

Diez
--
http://mail.python.org/mailman/listinfo/python-list


regex remove closest tag

2009-11-12 Thread S.Selvam
Hi all,


1) I need to remove the tags which are just before the keyword (i.e.
some_text2), excluding the others.

2) The input string may or may not contain tags.

3) Sample input:

inputstr = """start some_text1 some_text2 keyword anything"""

4) I came up with the following regex:

p = re.compile(r'(?P.*?)(\s*keyword|\s*keyword)(?P.*)', re.DOTALL|re.I)
s = p.search(inputstr)

but the second group matches both tags, while I need to match only the
most recent one.

I would like to get your suggestions.

Note:

If I leave group('good1') greedy, then it matches both tags.
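The archive appears to have stripped the literal tags from this post;
assuming the markup was <b>…</b> pairs (a guess on my part), a greedy
prefix group forces the match onto the tag pair immediately before the
keyword, leaving earlier tags untouched:

```python
import re

# Assumed reconstruction: <b> tags are a guess, the archive stripped them.
inputstr = "start <b>some_text1</b> <b>some_text2</b> keyword anything"

# The greedy (?P<good1>.*) swallows as much as possible, so the <b>...</b>
# that gets matched is the last one before "keyword".
p = re.compile(r"(?P<good1>.*)<b>(.*?)</b>(?P<good2>\s*keyword.*)",
               re.DOTALL | re.I)
result = p.sub(lambda m: m.group("good1") + m.group(2) + m.group("good2"),
               inputstr)
# -> "start <b>some_text1</b> some_text2 keyword anything"
```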
-- 
Yours,
S.Selvam
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python C API and references

2009-11-12 Thread lallous

Hello Daniel,

Thanks for the reply.



>> Every time I use PyObject_SetAttrString(obj, attr_name, py_val) and I don't
>> need the reference to py_val, should I decrement the reference after this
>> call?

> It really depends on /how/ the object is created. If the
> method used to create *py_val* increases the reference count
> on the object, you should use Py_DECREF or Py_XDECREF.

>> So for example:
>>
>> PyObject *py_val = PyInt_FromLong(5)
>> PyObject_SetAttrString(py_obj, "val", py_val);
>> Py_DECREF(py_val)
>>
>> Right?

> In this case it is right. *PyInt_FromLong()* returns a new
> reference: 'Return value: New reference.', which is increasing
> the reference count, and PyObject_SetAttrString does it twice,
> then you have a reference count of two and you need to decrement
> the initial reference counting of the object, or you will have
> a memory leak.

[quote]
int PyObject_SetAttr(PyObject *o, PyObject *attr_name, PyObject *v)
Set the value of the attribute named attr_name, for object o, to the value 
v. Returns -1 on failure. This is the equivalent of the Python statement 
o.attr_name = v.

int PyObject_SetAttrString(PyObject *o, const char *attr_name, PyObject *v)
Set the value of the attribute named attr_name, for object o, to the value 
v. Returns -1 on failure. This is the equivalent of the Python statement 
o.attr_name = v.
[/quote]

Looking at the documentation, should I have understood that the passed 
value's reference will be incremented and that I should decrement it if I 
don't need it?

Or should I have understood it just from the fact that whenever we have x 
= b (be it from the C API in the form of a SetAttr()) then we should know 
that b's reference will be incremented?

Because, before this discussion I did not know I should decrease the 
reference after SetAttr().

>> If so, take sysmodule.c:
>>
>> if (PyObject_SetAttrString(builtins, "_", Py_None) != 0)
>> return NULL;
>>
>> Shouldn't they also call Py_DECREF(Py_None)?

> No, I think that Py_None does not need to decrease the
> reference count...



None is an object like other objects. I think its reference must be taken 
into consideration too; for instance, why else is there the convenience 
macro Py_RETURN_NONE?


--
Elias 


--
http://mail.python.org/mailman/listinfo/python-list


Re: standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Diez B. Roggisch

Sriram Srinivasan schrieb:

> I guess why every programming language has some kind of a 'standard
> library' built in within it.
> In my view it must not be called as a 'library' at all. what it does
> is like a 'bunch of built-in programs ready-made to do stuff'.
>
> Lets see what a 'library' does:
>
> 1. offers books for customers
>  1.1 lets user select a book by genre, etc
>  1.2 lets user to use different books of same genre, etc
>  1.3 lets user to use books by same author, etc for different genre
>
> 2. keeps track of all the books + their genre
>  2.1 first knows what all books it has at present
>  2.2 when new book comes it is added to the particular shelf sorted by
> genre,author,edition, etc.
>  2.3 when books become old they are kept separately for future
> reference
>  2.4 very old books can be sent to a museum/discarded
>
> I guess no standard library does the minimum of this but wants to be
> called a library.
>
> As a python user I always wanted the standard library to have such
> features so the user/developer decides to use what set of libraries he
> want.
>
> consider the libraries for 2.5 ,2.6, 3K are all available to the user,
> the user selects what he wants with something like.
>
> use library 2.5 or use library 2.6 etc.
>
> The 2 main things that the library management interface has to do is
> intra library management and inter library management.
>
> intra library mgmt- consider books to be different libraries
> (standard, commercial, private, hobby, etc)
> inter library mgmt- consider books to be modules inside a library
> ( standard, commercial, private, hobby, etc)
>
> if somehow we could accomplish this kind of mother of a all plugin/ad-
> hoc system that is a real breakthrough.
>
> Advantages:
>
> 1. new modules can be added to the stream quickly
> 2. let the user select what he want to do
> 3. modules (that interdepend on each other) can be packed into small
> distribution and added to the stream quickly without waiting for new
> releases
> 4. solution to problems like py 2.x and 3.x
> 5. users can be up to date
> 6. documentation becomes easy + elaborate to users
> 7. bug managing is easy too
> 8. more feed back
> 9. testing also becomes easy
> 10. many more , i don't know.. you have to find.
>
> Python already has some thing like that __future__ stuff. but my
> question is how many people know that? and how many use that? most of
> them wait until old crust gets totally removed. that is bad for user
> and python. that is why problems like py2.x py3.x originate. If there
> is a newer book collection it must always be available at the library.
> i must not go to another library to get that book.


You are greatly oversimplifying things, and ignoring a *lot* of issues 
here. The reason for __future__ is that it can *break* things if new 
features were just introduced. Take the with-statement, reachable in 
python2.5 through


  from __future__ import with_statement

It introduces a new keyword, which until then could be happily used as 
variable name.


So you can't arbitrarily mix code that is written with one or the other 
feature missing.


Then there is the issue of evolving C APIs (or ABIs), which makes modules 
incompatible between interpreters.


And frankly, for most of your list I don't see how you think your 
"approach" reaches the stated advantages. Why is documentation becoming 
easier? Why bug managing? Why testing?


I'm sorry, but this isn't thought out in any way, it's just wishful 
thinking IMHO.


Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python C API and references

2009-11-12 Thread Mark Dickinson
On Nov 12, 9:23 am, "lallous"  wrote:
> Hello,
>
> Everytime I use PyObject_SetAttrString(obj, attr_name, py_val) and I don't
> need the reference to py_val I should decrement the reference after this
> call?

Not necessarily:  it depends where py_val came from.  I find the
'ownership' model described in the docs quite helpful:

http://docs.python.org/c-api/intro.html#reference-count-details

It's probably better to read the docs, but here's a short version:

If you create a reference to some Python object in a function in your
code, you then 'own' that reference, and you're responsible for
getting rid of it by the time the function exits.  There are various
ways this can happen:  you can return the reference, thereby
transferring ownership to the calling function; you can explicitly
discard the reference by calling Py_DECREF; or you can transfer
ownership by calling a function that 'steals' the reference (most
functions in the C-API don't steal references, but rather borrow them
for the duration of the function call).
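These C-level ownership rules have an observable Python-level analogue in
CPython (sys.getrefcount reports one extra reference for its own argument,
and this sketch assumes CPython's reference-counted implementation):

```python
import sys

obj = object()
baseline = sys.getrefcount(obj)

# Storing the object in a container (what PyObject_SetAttrString does
# at the C level) adds a reference owned by the container:
holder = [obj]
after_store = sys.getrefcount(obj)   # baseline + 1

# Dropping our own name is the Python analogue of Py_DECREF on the
# temporary: the container's reference keeps the object alive.
del obj
remaining = sys.getrefcount(holder[0])
```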

>
> So for example:
>
> PyObject *py_val = PyInt_FromLong(5)
> PyObject_SetAttrString(py_obj, "val", py_val);
> Py_DECREF(py_val)
>
> Right?

Yes.  Here you've created the reference in the first place, so you
should Py_DECREF it before you exit.  Right after the
PyObject_SetAttrString call is a good place to do this.

>
> If so, take sysmodule.c:
>
>         if (PyObject_SetAttrString(builtins, "_", Py_None) != 0)
>                 return NULL;
>
> Shouldn't they also call Py_DECREF(Py_None) ?

No, I don't think so.  I assume you're looking at the sys_displayhook
function?  This function doesn't create new references to Py_None
(well, except when it's about to return Py_None to the caller), so at
this point it doesn't own any reference to Py_None:  it's not
responsible for decrementing the reference count.

> Same logic applies to PyDict_SetItemString()

Yes.


--
Mark
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Sriram Srinivasan
On Nov 12, 3:56 pm, "Diez B. Roggisch"  wrote:
> Sriram Srinivasan schrieb:
>
>
>
> > I guess why every programming language has some kind of a 'standard
> > library' built in within it.
> > In my view it must not be called as a 'library' at all. what it does
> > is like a 'bunch of built-in programs ready-made to do stuff'.
>
> > Lets see what a 'library' does:
>
> > 1. offers books for customers
> >  1.1 lets user select a book by genre, etc
> >  1.2 lets user to use different books of same genre, etc
> >  1.3 lets user to use books by same author, etc for different genre
>
> > 2. keeps track of all the books + their genre
> >  2.1 first knows what all books it has at present
> >  2.2 when new book comes it is added to the particular shelf sorted by
> > genre,author,edition, etc.
> >  2.3 when books become old they are kept separately for future
> > reference
> >  2.4 very old books can be sent to a museum/discarded
>
> > I guess no standard library does the minimum of this but wants to be
> > called a library.
>
> > As a python user I always wanted the standard library to have such
> > features so the user/developer decides to use what set of libraries he
> > want.
>
> > consider the libraries for 2.5 ,2.6, 3K are all available to the user,
> > the user selects what he wants with something like.
>
> > use library 2.5 or use library 2.6 etc.
>
> > The 2 main things that the library management interface has to do is
> > intra library management and inter library management.
>
> > intra library mgmt- consider books to be different libraries
> > (standard, commercial, private, hobby, etc)
> > inter library mgmt- consider books to be modules inside a library
> > ( standard, commercial, private, hobby, etc)
>
> > if somehow we could accomplish this kind of mother of a all plugin/ad-
> > hoc system that is a real breakthrough.
>
> > Advantages:
>
> > 1. new modules can be added to the stream quickly
> > 2. let the user select what he want to do
> > 3. modules (that interdepend on each other) can be packed into small
> > distribution and added to the stream quickly without waiting for new
> > releases
> > 4. solution to problems like py 2.x and 3.x
> > 5. users can be up to date
> > 6. documentation becomes easy + elaborate to users
> > 7. bug managing is easy too
> > 8. more feed back
> > 9. testing also becomes easy
> > 10. many more , i don't know.. you have to find.
>
> > Python already has some thing like that __future__ stuff. but my
> > question is how many people know that? and how many use that? most of
> > them wait until old crust gets totally removed. that is bad for user
> > and python. that is why problems like py2.x py3.x originate. If there
> > is a newer book collection it must always be available at the library.
> > i must not go to another library to get that book.
>
> You are greatly oversimplifying things, and ignoring a *lot* of issues
> here. The reason for __future__ is that it can *break* things if new
> features were just introduced. Take the with-statement, reachable in
> python2.5 throug
>
>    from __future__ import with_statement
>
> It introduces a new keyword, which until then could be happily used as
> variable name.
>
> So you can't arbirtarily mix code that is written with one or the other
> feature missing.
>
> Then there is the issue of evolving C-APIs (or ABI), wich makes modules
> incompatible between interpreters.
>
> And frankly, for most of your list I don't see how you think your
> "approach" reaches the stated advantages. Why is documentation becoming
> easier? Why bug managing? Why testing?
>
> I'm sorry, but this isn't thought out in any way, it's just wishful
> thinking IMHO.
>
> Diez

I don't know if you have used Dev-C++. It has a 'package management'
mechanism for the standard libraries.
Please see the webpage where all the packaged libraries are stored.

In Python we have PyPI, which is equivalent to http://devpacks.org,
but in PyPI the packages are mostly user-made applications.
What I want is similar to PyPI but for the Python standard libraries,
so that they (the libraries) are as add-on as possible.
check this out too.. 


I guess you understand what I am thinking... and do pardon my English
too.

--

Regards,
Sriram.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Diez B. Roggisch

Sriram Srinivasan schrieb:

On Nov 12, 3:56 pm, "Diez B. Roggisch"  wrote:

Sriram Srinivasan schrieb:




I guess why every programming language has some kind of a 'standard
library' built in within it.
In my view it must not be called as a 'library' at all. what it does
is like a 'bunch of built-in programs ready-made to do stuff'.
Lets see what a 'library' does:
1. offers books for customers
 1.1 lets user select a book by genre, etc
 1.2 lets user to use different books of same genre, etc
 1.3 lets user to use books by same author, etc for different genre
2. keeps track of all the books + their genre
 2.1 first knows what all books it has at present
 2.2 when new book comes it is added to the particular shelf sorted by
genre,author,edition, etc.
 2.3 when books become old they are kept separately for future
reference
 2.4 very old books can be sent to a museum/discarded
I guess no standard library does the minimum of this but wants to be
called a library.
As a python user I always wanted the standard library to have such
features so the user/developer decides to use what set of libraries he
want.
consider the libraries for 2.5 ,2.6, 3K are all available to the user,
the user selects what he wants with something like.
use library 2.5 or use library 2.6 etc.
The 2 main things that the library management interface has to do is
intra library management and inter library management.
intra library mgmt- consider books to be different libraries
(standard, commercial, private, hobby, etc)
inter library mgmt- consider books to be modules inside a library
( standard, commercial, private, hobby, etc)
if somehow we could accomplish this kind of mother of a all plugin/ad-
hoc system that is a real breakthrough.
Advantages:
1. new modules can be added to the stream quickly
2. let the user select what he want to do
3. modules (that interdepend on each other) can be packed into small
distribution and added to the stream quickly without waiting for new
releases
4. solution to problems like py 2.x and 3.x
5. users can be up to date
6. documentation becomes easy + elaborate to users
7. bug managing is easy too
8. more feed back
9. testing also becomes easy
10. many more , i don't know.. you have to find.
Python already has some thing like that __future__ stuff. but my
question is how many people know that? and how many use that? most of
them wait until old crust gets totally removed. that is bad for user
and python. that is why problems like py2.x py3.x originate. If there
is a newer book collection it must always be available at the library.
i must not go to another library to get that book.

You are greatly oversimplifying things, and ignoring a *lot* of issues
here. The reason for __future__ is that it can *break* things if new
features were just introduced. Take the with-statement, reachable in
python2.5 throug

   from __future__ import with_statement

It introduces a new keyword, which until then could be happily used as
variable name.

So you can't arbitrarily mix code that is written with one or the other
feature missing.
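The clash is easy to demonstrate on a current interpreter: `compile()` rejects a module that still uses `with` as a plain name. A minimal sketch:

```python
# Code written before `with` became a keyword could use it as a variable
# name; such code stops compiling on newer interpreters. compile() lets
# us show the clash without actually executing the old code.
old_code = "with = 5\nprint(with)\n"

try:
    compile(old_code, "<pre-2.6 module>", "exec")
    result = "compiled fine"
except SyntaxError:
    result = "SyntaxError: 'with' is now a keyword"

print(result)
```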

Then there is the issue of evolving C-APIs (or ABI), which makes modules
incompatible between interpreters.

And frankly, for most of your list I don't see how you think your
"approach" reaches the stated advantages. Why is documentation becoming
easier? Why bug managing? Why testing?

I'm sorry, but this isn't thought out in any way, it's just wishful
thinking IMHO.

Diez


I don't know if you have used Dev-C++. It has a 'package management' mechanism for the
standard libraries.


No, it hasn't. It has packages for *additional* libraries. C++ has only 
a very dim concept of a standard library. And those libraries usually ship 
with the compiler, just as the standard library ships with Python.



Please see the webpage where all the packaged
libraries are stored.

In Python we have PyPI, which is equivalent to http://devpacks.org,
but on PyPI the packages are mostly user-made applications.
What I want is similar to PyPI but for the Python standard libraries,
so that they are as add-on as possible.
check this out too.. 


Why do you want that? What is the actual issue you are addressing? 
Python's strength is that it comes with a certain set of libs bundled. 
Artificially unbundling them, forcing users to install them separately, 
makes little sense, if any. Sure, fixing a bug might be *slightly* 
easier, but then the standard libraries are well-tested, and decoupling 
their evolution from the releases of the core interpreter opens up a 
whole can of worms.


There is a tradeoff for both approaches, and one can argue whether the 
current balance is the right one - but if so, the argument should be about 
concrete packages, not the system in general.



Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Sriram Srinivasan
On Nov 12, 3:56 pm, "Diez B. Roggisch"  wrote:
> Sriram Srinivasan schrieb:
>
>
>
> > I guess why every programming language has some kind of a 'standard
> > library' built in within it.
> > In my view it must not be called as a 'library' at all. what it does
> > is like a 'bunch of built-in programs ready-made to do stuff'.
>
> > Lets see what a 'library' does:
>
> > 1. offers books for customers
> >  1.1 lets user select a book by genre, etc
> >  1.2 lets user to use different books of same genre, etc
> >  1.3 lets user to use books by same author, etc for different genre
>
> > 2. keeps track of all the books + their genre
> >  2.1 first knows what all books it has at present
> >  2.2 when new book comes it is added to the particular shelf sorted by
> > genre,author,edition, etc.
> >  2.3 when books become old they are kept separately for future
> > reference
> >  2.4 very old books can be sent to a museum/discarded
>
> > I guess no standard library does the minimum of this but wants to be
> > called a library.
>
> > As a python user I always wanted the standard library to have such
> > features so the user/developer decides to use what set of libraries he
> > want.
>
> > consider the libraries for 2.5 ,2.6, 3K are all available to the user,
> > the user selects what he wants with something like.
>
> > use library 2.5 or use library 2.6 etc.
>
> > The 2 main things that the library management interface has to do is
> > intra library management and inter library management.
>
> > intra library mgmt- consider books to be different libraries
> > (standard, commercial, private, hobby, etc)
> > inter library mgmt- consider books to be modules inside a library
> > ( standard, commercial, private, hobby, etc)
>
> > if somehow we could accomplish this kind of mother of a all plugin/ad-
> > hoc system that is a real breakthrough.
>
> > Advantages:
>
> > 1. new modules can be added to the stream quickly
> > 2. let the user select what he want to do
> > 3. modules (that interdepend on each other) can be packed into small
> > distribution and added to the stream quickly without waiting for new
> > releases
> > 4. solution to problems like py 2.x and 3.x
> > 5. users can be up to date
> > 6. documentation becomes easy + elaborate to users
> > 7. bug managing is easy too
> > 8. more feed back
> > 9. testing also becomes easy
> > 10. many more , i don't know.. you have to find.
>
> > Python already has some thing like that __future__ stuff. but my
> > question is how many people know that? and how many use that? most of
> > them wait until old crust gets totally removed. that is bad for user
> > and python. that is why problems like py2.x py3.x originate. If there
> > is a newer book collection it must always be available at the library.
> > i must not go to another library to get that book.
>
> You are greatly oversimplifying things, and ignoring a *lot* of issues
> here. The reason for __future__ is that it can *break* things if new
> features were just introduced. Take the with-statement, reachable in
> python2.5 throug
>
>    from __future__ import with_statement
>
> It introduces a new keyword, which until then could be happily used as
> variable name.
>
> So you can't arbirtarily mix code that is written with one or the other
> feature missing.
>
> Then there is the issue of evolving C-APIs (or ABI), wich makes modules
> incompatible between interpreters.
>
> And frankly, for most of your list I don't see how you think your
> "approach" reaches the stated advantages. Why is documentation becoming
> easier? Why bug managing? Why testing?
>
> I'm sorry, but this isn't thought out in any way, it's just wishful
> thinking IMHO.
>
> Diez

__future__, as you said, helps stop things from breaking. What I suggest
is that both libraries (in your case, a simple one where `with` is
defined and one where it is not), like py2.x and py3.x, exist side by
side. If I want to use 3.x features in the morning and 2.x in the
evening, I just add on the corresponding library and get the required
functionality.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: comparing alternatives to py2exe

2009-11-12 Thread Jonathan Hartley
On Nov 10, 1:34 pm, Philip Semanchuk  wrote:
> On Nov 9, 2009, at 9:16 PM, Gabriel Genellina wrote:
>
>
>
> > En Fri, 06 Nov 2009 17:00:17 -0300, Philip Semanchuk escribió:
> >> On Nov 3, 2009, at 10:58 AM, Jonathan Hartley wrote:
>
> >>> Recently I put together this incomplete comparison chart in an  
> >>> attempt
> >>> to choose between the different alternatives to py2exe:
>
> >>>http://spreadsheets.google.com/pub?key=tZ42hjaRunvkObFq0bKxVdg&output...
>
> >> I was interested in py2exe because we'd like to provide a one  
> >> download, one click install experience for our Windows users. I  
> >> think a lot of people are interested in py2exe for the same reason.  
> >> Well, one thing that I came across in my travels was the fact that  
> >> distutils can create MSIs. Like py2exe, MSIs provide a one  
> >> download, one click install experience under Windows and therefore  
> >> might be a replacement for py2exe.
>
> > But py2exe and .msi are complementary, not a replacement.
> > py2exe collects in one directory (or even in one file in some cases)
> > all the pieces necessary to run your application. That is, Python
> > itself + your application code + all referenced libraries + other
> > required pieces.
> > The resulting files must be installed on the client machine; you  
> > either build a .msi file (a database for the Microsoft Installer) or  
> > use any other installer (like InnoSetup, the one I like).
>
> >> For me, the following command was sufficient to create an msi,  
> >> although it only worked under Windows (not under Linux or OS X):
> >> python setup.py bdist_msi
>
> >> The resulting MSI worked just fine in my extensive testing (read: I  
> >> tried it on one machine).
>
> > The resulting .msi file requires Python already installed on the  
> > target machine, if I'm not mistaken. The whole point of py2exe is to  
> > avoid requiring a previous Python install.
>
> You're right; the MSI I created doesn't include prerequisites. It  
> packaged up our app, that's it. To be fair to MSIs, they might be  
> capable of including prerequisites, the app, and the kitchen sink. But  
> I don't think Python's creation process through distutils makes that  
> possible.
>
> That's why I suggested MSIs belong alongside RPM/DEB in the chart (if  
> they're to be included at all).
>
> I wouldn't say that the whole point of py2exe is to bundle Python.  
> That's certainly a big benefit, but another benefit is that it can  
> bundle an app into a one-click download. That's why we were interested  
> in it.
>
>
>
> >> It seems, then, that creating an MSI is even within the reach of  
> >> someone like me who spends very little time in Windows-land, so it  
> >> might be worth a column on your chart alongside rpm/deb.
>
> > As said in http://wiki.python.org/moin/DistributionUtilities the  
> > easiest way is to use py2exe + InnoSetup.
>
> Easiest for you. =) The list of packages and modules that might  
> require special treatment is almost a perfect superset of the modules  
> we're using in our application:
> http://www.py2exe.org/index.cgi/WorkingWithVariousPackagesAndModules
>
> py2exe looks great, but it remains to be seen if it's the easiest way  
> to solve our problem. The MSI isn't nearly as nice for the end user,  
> but we created it using only the Python standard library and our  
> existing setup.py. Simplicity has value.
>
> Cheers
> Philip

Hey Philip and Gabriel,
Interesting to hear your respective perspectives - you've given me
much to ponder and to read about. Personally I'm keen to find a method
that doesn't require the end-user to have to manually install (the
correct version of) Python separately from my app, so I think that
rules out the distutils-generated MSI for me. I can see it has value
for others though.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Diez B. Roggisch

Sriram Srinivasan schrieb:

On Nov 12, 3:56 pm, "Diez B. Roggisch"  wrote:

Sriram Srinivasan schrieb:




I guess why every programming language has some kind of a 'standard
library' built in within it.
In my view it must not be called as a 'library' at all. what it does
is like a 'bunch of built-in programs ready-made to do stuff'.
Lets see what a 'library' does:
1. offers books for customers
 1.1 lets user select a book by genre, etc
 1.2 lets user to use different books of same genre, etc
 1.3 lets user to use books by same author, etc for different genre
2. keeps track of all the books + their genre
 2.1 first knows what all books it has at present
 2.2 when new book comes it is added to the particular shelf sorted by
genre,author,edition, etc.
 2.3 when books become old they are kept separately for future
reference
 2.4 very old books can be sent to a museum/discarded
I guess no standard library does the minimum of this but wants to be
called a library.
As a python user I always wanted the standard library to have such
features so the user/developer decides to use what set of libraries he
want.
consider the libraries for 2.5 ,2.6, 3K are all available to the user,
the user selects what he wants with something like.
use library 2.5 or use library 2.6 etc.
The 2 main things that the library management interface has to do is
intra library management and inter library management.
intra library mgmt- consider books to be different libraries
(standard, commercial, private, hobby, etc)
inter library mgmt- consider books to be modules inside a library
( standard, commercial, private, hobby, etc)
if somehow we could accomplish this kind of mother of a all plugin/ad-
hoc system that is a real breakthrough.
Advantages:
1. new modules can be added to the stream quickly
2. let the user select what he want to do
3. modules (that interdepend on each other) can be packed into small
distribution and added to the stream quickly without waiting for new
releases
4. solution to problems like py 2.x and 3.x
5. users can be up to date
6. documentation becomes easy + elaborate to users
7. bug managing is easy too
8. more feed back
9. testing also becomes easy
10. many more , i don't know.. you have to find.
Python already has some thing like that __future__ stuff. but my
question is how many people know that? and how many use that? most of
them wait until old crust gets totally removed. that is bad for user
and python. that is why problems like py2.x py3.x originate. If there
is a newer book collection it must always be available at the library.
i must not go to another library to get that book.

You are greatly oversimplifying things, and ignoring a *lot* of issues
here. The reason for __future__ is that it can *break* things if new
features were just introduced. Take the with-statement, reachable in
python2.5 through

   from __future__ import with_statement

It introduces a new keyword, which until then could be happily used as
variable name.

So you can't arbitrarily mix code that is written with one or the other
feature missing.

Then there is the issue of evolving C-APIs (or ABI), which makes modules
incompatible between interpreters.

And frankly, for most of your list I don't see how you think your
"approach" reaches the stated advantages. Why is documentation becoming
easier? Why bug managing? Why testing?

I'm sorry, but this isn't thought out in any way, it's just wishful
thinking IMHO.

Diez


__future__, as you said, helps stop things from breaking. What I suggest
is that both libraries (in your case, a simple one where `with` is
defined and one where it is not), like py2.x and py3.x, exist side by
side. If I want to use 3.x features in the morning and 2.x in the
evening, I just add on the corresponding library and get the required
functionality.


This doesn't make sense to me. What are you doing in the morning, and 
what in the evening, and to what extent is the code the same, or are you 
talking about different pieces of code?


Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Sriram Srinivasan
On Nov 12, 4:35 pm, "Diez B. Roggisch"  wrote:
> Sriram Srinivasan schrieb:
>
>
>
> > On Nov 12, 3:56 pm, "Diez B. Roggisch"  wrote:
> >> Sriram Srinivasan schrieb:
>
> >>> I guess why every programming language has some kind of a 'standard
> >>> library' built in within it.
> >>> In my view it must not be called as a 'library' at all. what it does
> >>> is like a 'bunch of built-in programs ready-made to do stuff'.
> >>> Lets see what a 'library' does:
> >>> 1. offers books for customers
> >>>  1.1 lets user select a book by genre, etc
> >>>  1.2 lets user to use different books of same genre, etc
> >>>  1.3 lets user to use books by same author, etc for different genre
> >>> 2. keeps track of all the books + their genre
> >>>  2.1 first knows what all books it has at present
> >>>  2.2 when new book comes it is added to the particular shelf sorted by
> >>> genre,author,edition, etc.
> >>>  2.3 when books become old they are kept separately for future
> >>> reference
> >>>  2.4 very old books can be sent to a museum/discarded
> >>> I guess no standard library does the minimum of this but wants to be
> >>> called a library.
> >>> As a python user I always wanted the standard library to have such
> >>> features so the user/developer decides to use what set of libraries he
> >>> want.
> >>> consider the libraries for 2.5 ,2.6, 3K are all available to the user,
> >>> the user selects what he wants with something like.
> >>> use library 2.5 or use library 2.6 etc.
> >>> The 2 main things that the library management interface has to do is
> >>> intra library management and inter library management.
> >>> intra library mgmt- consider books to be different libraries
> >>> (standard, commercial, private, hobby, etc)
> >>> inter library mgmt- consider books to be modules inside a library
> >>> ( standard, commercial, private, hobby, etc)
> >>> if somehow we could accomplish this kind of mother of a all plugin/ad-
> >>> hoc system that is a real breakthrough.
> >>> Advantages:
> >>> 1. new modules can be added to the stream quickly
> >>> 2. let the user select what he want to do
> >>> 3. modules (that interdepend on each other) can be packed into small
> >>> distribution and added to the stream quickly without waiting for new
> >>> releases
> >>> 4. solution to problems like py 2.x and 3.x
> >>> 5. users can be up to date
> >>> 6. documentation becomes easy + elaborate to users
> >>> 7. bug managing is easy too
> >>> 8. more feed back
> >>> 9. testing also becomes easy
> >>> 10. many more , i don't know.. you have to find.
> >>> Python already has some thing like that __future__ stuff. but my
> >>> question is how many people know that? and how many use that? most of
> >>> them wait until old crust gets totally removed. that is bad for user
> >>> and python. that is why problems like py2.x py3.x originate. If there
> >>> is a newer book collection it must always be available at the library.
> >>> i must not go to another library to get that book.
> >> You are greatly oversimplifying things, and ignoring a *lot* of issues
> >> here. The reason for __future__ is that it can *break* things if new
> >> features were just introduced. Take the with-statement, reachable in
> >> python2.5 throug
>
> >>    from __future__ import with_statement
>
> >> It introduces a new keyword, which until then could be happily used as
> >> variable name.
>
> >> So you can't arbirtarily mix code that is written with one or the other
> >> feature missing.
>
> >> Then there is the issue of evolving C-APIs (or ABI), wich makes modules
> >> incompatible between interpreters.
>
> >> And frankly, for most of your list I don't see how you think your
> >> "approach" reaches the stated advantages. Why is documentation becoming
> >> easier? Why bug managing? Why testing?
>
> >> I'm sorry, but this isn't thought out in any way, it's just wishful
> >> thinking IMHO.
>
> >> Diez
>
> > __future__ as you said helps in stop breaking things. what i suggest
> > that if both the libraries (in yr case-with is defined in one and
> > other with is not defined is a simple one) like py2.x and py3.x exists
> > and i want to use 3.x features in the morning then in the evening i
> > want to use 2.x or something like that just add on the library and i
> > get the require functionality
>
> This doesn't make sense to me. What are you doing in the morning, and
> what in the evening, and to what extend is the code the same or are you
> talking about different pieces of code?
>
> Diez


OK, let me make it clearer.
Forget how you use Python now; I am talking about __futuristic__
Python programming.

There are no more python2.x, python3.x or python y.x releases; there
are only updates of Python and the standard library, say 1.1.5 and 1.1.6.
Let the difference be an old xml library updated with new regex
support.

I am coding my program now,
and I want my application to be compatible with the 1.1.5 library:

use library 1.1.5
import blah from blahblah
...
...

I cannot use the regex feature of xml in this application.
I then update my library in the evening;
now both libraries are present in my system,
and I try using the new feature:

use library 1.1.6  # that's all; now I get all the features
import blah from blahblah

Re: query regarding file handling.

2009-11-12 Thread Himanshu
2009/11/12 ankita dutta :
> hi all,
>
> i have a file of 3x3 matrix of decimal numbers(tab separated). like this :
>
> 0.02    0.38    0.01
> 0.04    0.32    0.00
> 0.03    0.40    0.02
>
> now i want to read 1 row and get the sum of a particular row. but when i am
> trying with the following code, i am getting errors :
>
> code:
> "
> ln1=open("A.txt","r+")    # file "A.txt" contains my matrix
> lines1=ln1.readlines()
> n_1=[ ]
>
> for p1 in range (0,len(lines1)):
>     f1=lines1[p1]
>     n_1.append((f1) )
> print n_1
> print  sum(n_1[0])
>
> "
>
> output:
>
> ['0.0200\t0.3877\t0.0011\n', '0.0040\t0.3292\t0.0001\n',
> '0.0355\t0.4098\t0.0028\n', '0.0035\t0.3063\t0.0001\n',
> '0.0080\t0.3397\t0.0002\n']
>
> Traceback (most recent call last):
>   File "A_1.py", line 20, in 
>     print sum(nodes_1[0])
>   File "/usr/lib/python2.5/site-packages/numpy/core/fromnumeric.py", line
> 993, in sum
>     return _wrapit(a, 'sum', axis, dtype, out)
>   File "/usr/lib/python2.5/site-packages/numpy/core/fromnumeric.py", line
> 37, in _wrapit
>     result = getattr(asarray(obj),method)(*args, **kwds)
> TypeError: cannot perform reduce with flexible type
>
>
> what I think:
>
> as the list is in form of   '0.0200\t0.3877\t0.0011\n'    ,  n_1[0]  takes
> it as a whole string   which includes "\t" , i think thats why they are
> giving error.
>
> now how can i read only required numbers from this line
> '0.0200\t0.3877\t0.0011\n'  and find their sum ?
> can you kindly help me out how to properly code thing .
>

Yes you have it right. Split the string at spaces and convert the
numeric parts to floats before summing. Something along these lines :-

ln1 = open("A.txt", "r+")    # file "A.txt" contains my matrix
lines1 = ln1.readlines()
n_1 = []

for p1 in range(0, len(lines1)):
    f1 = lines1[p1]
    n_1.append(f1)
print n_1

import re
nos = []
for s in re.split('\s+', n_1[0]):
    if s != '':
        nos.append(float(s))
print nos
print sum(nos)

Better still, use the csv module as suggested.
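For comparison, a small self-contained sketch of the csv approach. Here io.StringIO stands in for the real "A.txt" (an assumption made only so the snippet runs on its own):

```python
import csv
import io

# stand-in for open("A.txt") -- same tab-separated layout as the question
data = io.StringIO("0.02\t0.38\t0.01\n"
                   "0.04\t0.32\t0.00\n"
                   "0.03\t0.40\t0.02\n")

# csv.reader splits each line on the tab delimiter; convert cells to float
rows = [[float(cell) for cell in row]
        for row in csv.reader(data, delimiter="\t")]

row_sums = [sum(row) for row in rows]
print(row_sums[0])   # first row: 0.02 + 0.38 + 0.01, i.e. about 0.41
```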

Thank You,
++imanshu
-- 
http://mail.python.org/mailman/listinfo/python-list


NetCDF to ascii file

2009-11-12 Thread Arnaud Vandecasteele
Hi all,

I would like to know if it's possible to read data from a netcdf file and
export it into an ASCII file.
I'm trying to get the latitude, longitude and a particular value from a
netcdf file, but I don't know exactly how to do it.
I managed to open and read the netcdf file, but I don't know how to export
the data.
Here is my simple script :

import Scientific.IO.NetCDF as nc
from Numeric import *
import sys

try:
    ncFile = nc.NetCDFFile("tos_O1_2001-2002.nc", "r")
except:
    print "can't open the file"
    sys.exit(1)

try:
    print "# Dimensions #"
    print ncFile.dimensions.keys()

    print "# Variables #"
    print ncFile.variables.keys()
    # returns ['time_bnds', 'lat_bnds', 'lon', 'lon_bnds', 'time', 'lat', 'tos']

    print "# Var Dim #"
    tos = ncFile.variables["tos"]
    print tos.dimensions
    # returns: ('time', 'lat', 'lon')

    tosValue = tos.getValue()

except:
    ncFile.close()


Do you know how I can get, for each tos value, the corresponding lat, lon and time values?
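As a hedged sketch of the missing export step: once `tos` and the `lat`, `lon` and `time` variables have been read, each behaves like a nested sequence, so writing an ASCII table is a matter of plain loops. The tiny lists below are made-up placeholders for the real NetCDF variables (which I cannot reproduce here); the names mirror the script above:

```python
# Stand-in data (assumption): real values would come from
# ncFile.variables["time"][:], ["lat"][:], ["lon"][:] and tos.getValue().
times = [0.0, 1.0]
lats = [-10.0, 0.0]
lons = [100.0, 110.0]
tos = [[[290.1, 290.5],          # shape (time, lat, lon)
        [291.0, 291.2]],
       [[289.8, 290.0],
        [290.7, 290.9]]]

# one tab-separated row per (time, lat, lon) cell
with open("tos_ascii.txt", "w") as out:
    out.write("time\tlat\tlon\ttos\n")
    for ti, t in enumerate(times):
        for li, la in enumerate(lats):
            for gi, lo in enumerate(lons):
                out.write("%g\t%g\t%g\t%g\n" % (t, la, lo, tos[ti][li][gi]))
```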

Best regards

Arnaud
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Error in Windmill

2009-11-12 Thread Himanshu
2009/11/12 Raji Seetharaman :
>
> Hi
>
> Im learning Web scraping with Python from here
> http://www.packtpub.com/article/web-scraping-with-python-part-2
>
> From the above link, the complete code is here http://pastebin.com/m10046dc6
>
> When i run the program in the terminal i receive following errors
>
> File "nasa.py", line 41, in 
>     test_scrape_iotd_gallery()
>   File "nasa.py", line 24, in test_scrape_iotd_gallery
>     client = WindmillTestClient(__name__)
>   File
> "/usr/local/lib/python2.6/dist-packages/windmill-1.3-py2.6.egg/windmill/authoring/__init__.py",
> line 142, in __init__
>     method_proxy = windmill.tools.make_jsonrpc_client()
>   File
> "/usr/local/lib/python2.6/dist-packages/windmill-1.3-py2.6.egg/windmill/tools/__init__.py",
> line 35, in make_jsonrpc_client
>     url = urlparse(windmill.settings['TEST_URL'])
> AttributeError: 'module' object has no attribute 'settings'
>
> Suggest me
>
> Thanks
> Raji. S
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>

Google or See 
http://groups.google.com/group/windmill-dev/browse_thread/thread/c921f7a25c0200c9
-- 
http://mail.python.org/mailman/listinfo/python-list


Writing an emulator in python - implementation questions (for performance)

2009-11-12 Thread Santiago Romero

 Hi.

 I'm trying to port (just for fun) my old Sinclair Spectrum emulator,
ASpectrum, from C to Python + pygame.

 Although the Sinclair Spectrum has a simple 8-bit 3.5 MHz Z80
microprocessor and no additional hardware (excluding the +2/+3 models'
AY sound chip), I'm not sure if my beloved scripting language, Python,
will be fast enough to emulate the Sinclair Spectrum at 100% speed.
There are Java Spectrum emulators available, so it should be possible.

 Anyway, this message is meant to prepare the DESIGN so that the
code can run as fast as possible. I mean, I want to settle on the best way
to do some emulation tasks before starting to write a single line of
code.

 My questions:


 GLOBAL VARIABLES VS OBJECTS:
==

 I need the emulation to be as fast as possible. In my C program I
have a struct called "Z80_Processor" that contains all the registers,
memory structures, and so on. I pass that struct to the Z80Decode(),
Z80Execute() or Z80Dissassemble() functions, etc. This allows me (in
my C emulator) to emulate multiple Z80 processors if I want.

 As Python is interpreted and I imagine the emulation will be slower
than the emulator written in C, I've thought of not creating a
Z80_Processor *object*, and instead declaring global variables such as
reg_A, reg_B, reg_PC, an array main_memory[] and so on, and letting the
Z80 functions access those globals directly.

 I'm doing this to avoid OOP's extra processing... but it makes the
program less modular. Do you think using processor "objects" would
make the execution slower, or am I doing the right thing by using global
variables and avoiding objects in this type of program?

 Should I start writing all the code with a Z80CPU object and, if
performance is low, just remove the "object" layer and declare everything
as globals, or should I go directly for globals?



 HIGH AND LOW PART OF REGISTERS:
=

- In C, I have the following structs and code for emulating registers:

typedef union
{
  struct
  {
unsigned char l, h;
  } B;

  unsigned short W;
} eword;

eword reg_A;

 This means that reg_A is a 16-bit "variable" that I can directly
access with reg_A.W = value, and I can also access the LOW BYTE and
HIGH BYTE with reg_A.B.l and reg_A.B.h. More importantly, changing W
modifies l and h, and changing l or h modifies W.

 How can I implement this in Python? I mean, define a 16-bit variable
so that the high and low bytes can be accessed separately, and changing
W, H or L affects the entire variable? I would like to avoid doing bit
masks to get or change the HIGH or LOW parts of a variable, and let the
compiled code do it by itself.

 I know I can write an "object" with set and get methods that
implement that (which could be done in C too), but for emulation I
need the FASTEST implementation possible (something like C's union
trick).


 MEMORY (ARRAYS):
===

 To emulate the Spectrum's memory in C, I have a 64KB array: unsigned
char memory_array[65536]. Then I can access memory_array[reg_PC] to
fetch the next opcode (or data for an opcode) and just increment
reg_PC.

 Is python's array module the best (and fastest) implementation to
"emulate" the memory?


 MEMORY (PAGES):
=

 The Sinclair Spectrum 8 bit computer can address 64KB of memory and
that memory is based on 16KB pages (so it can see 4 pages
simultaneously, where page 0 is always ROM). Programs can change
"pages" to point to aditional 16KB pages in 128KB memory models.

 I don't know how to emulate paging in python...

 My first approach would be to have eight 16KB arrays, and "memcpy()"
memory to the main 64KB array when the user calls page swapping. I
mean (C + pseudocode):


 unsigned char main_memory[65536];
 unsigned char memory_blocks[8][16384];

 // Initial settings
 int current_pages[4] = {0, 1, 2, 3};

 // User swaps last memory slot (3) to page 7, so I:
 page_to_swap_from = 3;
 page_to_map = 7;

 // Save the contents of the current page (the page being unmapped):
 memcpy( memory_blocks[current_pages[page_to_swap_from]],   // To
         main_memory + 16384 * page_to_swap_from,           // From
         16384 );                                           // 16K

 // Now map page 7 into memory slot 3:
 memcpy( main_memory + page_to_swap_from * 16384,           // To
         memory_blocks[page_to_map],                        // From
         16384 );                                           // 16K
 current_pages[page_to_swap_from] = page_to_map;



 Memcpy is very fast in C, but I don't know if doing the same in
Python with arrays would be fast enough, or if there is a better
approach to simulate paging of 16KB blocks in a 64KB memory window (4
mappable memory blocks).

 Maybe another approach based on pointers or something like that?
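One copy-free alternative worth considering: keep the eight 16KB banks as separate buffers and route every access through a 4-slot page table, so swapping a page is a single assignment rather than a memcpy. A hedged sketch:

```python
# eight 16KB banks; slots 0-3 cover the 64KB address space, slot 0 = ROM
banks = [bytearray(16384) for _ in range(8)]
current_pages = [0, 1, 2, 3]

def read_byte(addr):
    # addr >> 14 selects the 16KB slot, addr & 0x3FFF the offset inside it
    return banks[current_pages[addr >> 14]][addr & 0x3FFF]

def write_byte(addr, value):
    banks[current_pages[addr >> 14]][addr & 0x3FFF] = value & 0xFF

def map_page(slot, page):
    current_pages[slot] = page     # O(1) page swap -- no copying at all

write_byte(0xC000, 0xAA)           # slot 3 currently holds page 3
map_page(3, 7)                     # 128K-style banking: page 7 into slot 3
write_byte(0xC000, 0xBB)
map_page(3, 3)                     # map page 3 back
print(hex(read_byte(0xC000)))      # 0xaa -- old contents survived the swap
```

The per-access indexing costs a little on every read/write, so it trades steady overhead for never paying a 16KB copy; only profiling can say which wins here.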

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Diez B. Roggisch


ok let me make it more clear..
forget how you use python now.. i am talking about __futuristic__
python programming.



there is no more python2.x or python3.x or python y.x releases. there
is only updates of python and standard library say 1.1.5 and 1.1.6.
let the difference be an old xml library updated with a new regex
support.

i am coding my program now.
i want my application to be compatible with 1.1.5 library

use library 1.1.5
import blah from blahblah
...
...

i cannot use regex feature of xml in this application
i then update my library in the evening
now both the libraries are present in my system.
now i try using the new feature

use library 1.1.6 #thats all now i get all features
import blah from blahblah


This is not futuristic, this is state of the art with PyPI & setuptools.

You still haven't addressed all the issues that arise, though, regarding 
different Python interpreter versions having different syntax and ABIs.
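For installed distributions, the "use library X.Y.Z" line already maps onto setuptools' pkg_resources; a minimal sketch (it assumes setuptools itself is installed, and it of course cannot paper over the interpreter syntax/ABI differences above):

```python
import pkg_resources

# inspect an installed distribution...
dist = pkg_resources.get_distribution("setuptools")
print(dist.project_name, dist.version)

# ...and pin a version range at runtime; an unmet requirement raises
# DistributionNotFound / VersionConflict instead of silently importing
pkg_resources.require("setuptools>=0.6")
```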


Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: About one class/function per module

2009-11-12 Thread Hans-Peter Jansen
On Tuesday 03 November 2009, 12:52:20 Diez B. Roggisch wrote:
> Peng Yu wrote:
> > On Mon, Nov 2, 2009 at 9:39 PM, alex23  wrote:
> >> Peng Yu  wrote:
> >>> I don't think that this is a problem that can not be overcome. A
> >>> simple solution might be to associate a unique identifier to each
> >>> file, so that even the filename has been changed, the new version and
> >>> the old version can still be identified as actually the same file.
> >>
> >> Or, again, you could work _with_ the tools you're using in the way
> >> they're meant to be used, rather than re-inventing the whole process
> >> of version control yourself.
> >
> > I'm not trying to reinvent a new version control. But due to this
> > drawback, I avoid use a version control system. Rather, I compressed
> > my source code in a tar file whenever necessary. But if a version
> > control system has this capability, I'd love to use it. And I don't
> > think that no version control system support this is because of any
> > technical difficult but rather because of practical issue (maybe it
> > takes a lot efforts to add this feature to an existing version control
> > system?).
>
> There are those people who realize if *everything* they encounter needs
> adjustment to fit their needs, their needs need adjustment.
>
> Others fight an uphill battle & bicker about things not being as they
> want them.
>
> Don't get me wrong - innovation often comes from scratching ones personal
> itch. But you seem to be suffering from a rather bad case of
> neurodermatitis.

Diez, sorry for chiming in this late, but while the whole thread has 
spilled over for no good reason, your QOTW reminded me of a quote from 
R.A.W. that sounds like a perfect fit:

Whatever the Thinker thinks, the Prover will prove. 

And if the Thinker thinks passionately enough, the Prover will prove the 
thought so conclusively that you will never talk a person out of such a 
belief, even if it is something as remarkable as the notion that there is a 
gaseous vertebrate of astronomical heft ("GOD") who will spend all eternity 
torturing people who do not believe in his religion. 

>From "Prometheus Rising" by Robert Anton Wilson

Pete

http://en.wikiquote.org/wiki/Robert_Anton_Wilson
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Error in Windmill

2009-11-12 Thread Raji Seetharaman
On Thu, Nov 12, 2009 at 5:58 PM, Himanshu  wrote:

> 2009/11/12 Raji Seetharaman :
> >
> > Hi
> >
> > Im learning Web scraping with Python from here
> > http://www.packtpub.com/article/web-scraping-with-python-part-2
> >
> > From the above link, the complete code is here
> http://pastebin.com/m10046dc6
> >
> > When i run the program in the terminal i receive following errors
> >
> > File "nasa.py", line 41, in 
> > test_scrape_iotd_gallery()
> >   File "nasa.py", line 24, in test_scrape_iotd_gallery
> > client = WindmillTestClient(__name__)
> >   File
> >
> "/usr/local/lib/python2.6/dist-packages/windmill-1.3-py2.6.egg/windmill/authoring/__init__.py",
> > line 142, in __init__
> > method_proxy = windmill.tools.make_jsonrpc_client()
> >   File
> >
> "/usr/local/lib/python2.6/dist-packages/windmill-1.3-py2.6.egg/windmill/tools/__init__.py",
> > line 35, in make_jsonrpc_client
> > url = urlparse(windmill.settings['TEST_URL'])
> > AttributeError: 'module' object has no attribute 'settings'
> >
> > Suggest me
> >
> > Thanks
> > Raji. S
> >
> > --
> > http://mail.python.org/mailman/listinfo/python-list
> >
> >
>
> Google or See
> http://groups.google.com/group/windmill-dev/browse_thread/thread/c921f7a25c0200c9
>

Thanks  for your help.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Steven D'Aprano
On Thu, 12 Nov 2009 00:31:57 -0800, Sriram Srinivasan wrote:

> I guess why every programming language has some kind of a 'standard
> library' built in within it.
> In my view it must not be called as a 'library' at all. what it does is
> like a 'bunch of built-in programs ready-made to do stuff'.
> 
> Lets see what a 'library' does:
> 
> 1. offers books for customers
[...]


You are describing a lending library, which is not the only sort of 
library. My personal library doesn't do any of those things. It is just a 
room with shelves filled with books.

Words in English can have more than one meaning. Horses run, 
refrigerators run, and even though they don't have either legs or motors, 
programs run too. The argument that code libraries don't behave like 
lending libraries won't get you anywhere.


> As a python user I always wanted the standard library to have such
> features so the user/developer decides to use what set of libraries he
> want.
> 
> consider the libraries for 2.5 ,2.6, 3K are all available to the user,
> the user selects what he wants with something like.
> 
> use library 2.5 or use library 2.6 etc.

Since library features are tied closely to the features available in the 
Python interpreter, the way to use library 2.5 is to use Python 2.5. You 
*might* be able to use library 2.5 with Python 2.6 or 2.4; you absolutely 
won't be able to use it with Python 1.5 or 3.1.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can a module know the module that imported it?

2009-11-12 Thread Steven D'Aprano
On Wed, 11 Nov 2009 13:44:06 -0800, Ethan Furman wrote:

> Well, I don't know what kj is trying to do, but my project is another
> (!) configuration program.  (Don't worry, I won't release it... unless
> somebody is interested, of course !)
> 
> So here's the idea so far:
> The configuration data is stored in a python module (call it
> settings.py).  In order to be able to do things like add settings to it,
> save the file after changes are made, etc., settings.py will import the
> configuration module, called configure.py.
> 
> A sample might look like this:
> 
> 
> import configure
> 
> paths = configure.Item()
> paths.tables = 'c:\\app\\data'
> paths.temp = 'c:\\temp'
> 
> 
> And in the main program I would have:
> 
> 
> import settings
> 
> main_table = dbf.Table('%s\\main' % paths.tables)
> 
> # user can modify path locations, and does, so update # we'll say it
> changes to \work\temp
> 
> settings.paths.temp = user_setting()
> settings.configure.save()
> 
> 
> And of course, at this point settings.py now looks like
> 
> 
> import configure
> 
> paths = configure.Item()
> paths.tables = 'c:\\app\\data'
> paths.temp = 'c:\\work\\temp'
> 


Self-modifying code?

UNCLEAN!!! UNCLEAN!!!


> Now, the tricky part is the line
> 
> settings.configure.save()
> 
> How will save know which module it's supposed to be re-writing?

In my opinion, the cleanest way is to tell it which module to re-write. 
Or better still, tell it which *file* to write to:

settings.configure.save(settings.__file__)

which is still self-modifying, but at least it doesn't need magic to 
modify itself.
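A minimal sketch of that approach (the `save` signature and the generated layout are assumptions following the example above, not Ethan's actual code):

```python
import os
import tempfile

def save(path, paths):
    """Rewrite the settings file at `path` from the current values.

    `paths` is any object whose attributes hold the settings; the
    configure.Item layout mirrors the settings.py example above.
    """
    lines = ["import configure", "", "paths = configure.Item()"]
    for name in sorted(vars(paths)):
        lines.append("paths.%s = %r" % (name, getattr(paths, name)))
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Example: save a couple of settings into a temp file and read it back.
class _Item(object):
    pass

settings = _Item()
settings.tables = "/app/data"
settings.temp = "/tmp"

fd, path = tempfile.mkstemp(suffix=".py")
os.close(fd)
save(path, settings)
text = open(path).read()
os.remove(path)
```

Passing the file name explicitly means save() never has to guess which module imported it.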




-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Sriram Srinivasan
On Nov 12, 6:07 pm, "Diez B. Roggisch"  wrote:
> > ok let me make it more clear..
> > forget how you use python now.. i am talking about __futuristic__
> > python programming.
>
> > there is no more python2.x or python3.x or python y.x releases. there
> > is only updates of python and standard library say 1.1.5 and 1.1.6.
> > let the difference be an old xml library updated with a new regex
> > support.
>
> > i am coding my program now.
> > i want my application to be compatible with 1.1.5 library
>
> > use library 1.1.5
> > import blah form blahblah
> > ...
> > ...
>
> > i cannot use regex feature of xml in this application
> > i then update my library in the evening
> > now both the libraries are present in my system.
> > now i try using the new feature
>
> > use library 1.1.6 #thats all now i get all features
> > import blah from blahblah
>
> This is not futuristic, this is state of the art with PyPI & setuptools.
>
> You still haven't addressed all the issues that arise though, regarding
> different python interpreter versions having different syntax and ABI.
>
> Diez

haha... it would be awesome if they implement it in the 'future'.. i
posted the same to [email protected]; it seems distutils is
getting a big overhaul. (i hope they add an uninstall option to
setuptools and easy_install.) they say there are many ways to do that
using pkg tools... but python wants one and only one way - the python
way.
regarding issues:

1. security is always an issue
2. regarding problems like with statement you mentioned earlier.. i
think there will be a basic feature set that is common for all
versions of add-on libraries.
3.when you want the new feature you have to update your python
interpreter

use interpreter 1.5.2

may trigger the proper interpreter plugin (responsible for the
additional feature) to load and add functionality.. it's simple to say
but hard to make the compiler pluggable; maybe they'll figure it out.

use library x.y.z

while issuing this command the default interpreter has to
automatically resolve dependencies of the core c/cpp static libraries
and other shared libraries. so that must not be an issue. if they have
implemented this much, dep solving is nothing.

futuristic python may also contain an option for compiling a module
into a static library. so we can code python libraries in python
(mostly) itself. think of pypy. it is already state of the art.


-- 
http://mail.python.org/mailman/listinfo/python-list


Does turtle graphics have the wrong associations?

2009-11-12 Thread Alf P. Steinbach
One reaction to <http://preview.tinyurl.com/ProgrammingBookP3> has 
been that turtle graphics may be off-putting to some readers because it is 
associated with children's learning.


What do you think?


Cheers,

- Alf
--
http://mail.python.org/mailman/listinfo/python-list


Re: python simply not scaleable enough for google?

2009-11-12 Thread samwyse
On Nov 11, 3:57 am, "Robert P. J. Day"  wrote:
> http://groups.google.com/group/unladen-swallow/browse_thread/thread/4...
>
>   thoughts?

Google's already given us its thoughts:
http://developers.slashdot.org/story/09/11/11/0210212/Go-Googles-New-Open-Source-Programming-Language
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: New syntax for blocks

2009-11-12 Thread Steven D'Aprano
On Wed, 11 Nov 2009 03:52:45 -0800, Carl Banks wrote:

>> This is where a helper function is good. You want a dispatcher:
> 
> No I really don't.  I want to be able to see the action performed
> adjacent to the test, and not have to scroll up to down ten pages to
> find whatever function it dispatched to.


Then re-write the dispatcher to return a tuple (match_object, 
method_to_call) and then call them there at the spot.
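A sketch of that shape (the rule table and names here are made up for illustration):

```python
import re

# Each rule pairs a pattern with the action to run on a match.
RULES = [
    (re.compile(r"\d+"), lambda m: int(m.group())),
    (re.compile(r"[a-z]+"), lambda m: m.group().upper()),
]

def dispatch(text):
    """Return (match_object, method_to_call), or (None, None)."""
    for pattern, action in RULES:
        m = pattern.match(text)
        if m:
            return m, action
    return None, None

# The action is invoked right at the call site, next to the test:
m, action = dispatch("42 apples")
if m:
    result = action(m)   # -> 42
```

This keeps the table of patterns in one place while the action still runs adjacent to the match, addressing Carl's objection.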




-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


How can a module know the module that imported it?

2009-11-12 Thread kj


The subject line says it all.

Thanks!

kynn
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python & Go

2009-11-12 Thread Michele Simionato
I forgot to post a link to a nice analysis of Go:
http://scienceblogs.com/goodmath/2009/11/googles_new_language_go.php
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Writing an emulator in python - implementation questions (for performance)

2009-11-12 Thread Carl Banks
On Nov 12, 4:35 am, Santiago Romero  wrote:
>  Hi.
>
>  I'm trying to port (just for fun), my old Sinclair Spectrum emulator,
> ASpectrum, from C to Python + pygame.

The answer to your question is, "Use numpy".  More details below.



[snip]
>  Should I start writing all the code with a Z80CPU object and if
> performance is low, just remove the "object" layer and declare it as
> globals,

Yes, but I'd suggest indexing a numpy array rather than using
structures.  See below.



[snip]
>  How can I implement this in Python, I mean, define a 16 byte variable
> so that high and low bytes can be accessed separately and changing W,
> H or L affects the entire variable? I would like to avoid doing BIT
> masks to get or change HIGH or LOW parts of a variable and let the
> compiled code to do it by itself.

You can do clever memory slicing like this with numpy.  For instance:

breg = numpy.zeros((16,),numpy.uint8)
wreg = numpy.ndarray((8,),numpy.uint16,breg)

This causes breg and wreg to share the same 16 bytes of memory.  You
can define constants to access specific registers:

R1L = 1
R1H = 2
R1 = 1

breg[R1H] = 2
print wreg[R1]
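One caveat worth adding (my note, not part of the original advice, and assuming NumPy is installed): which byte of a word a given breg index maps to depends on the host's endianness, so constants like R1L/R1H should really be chosen per platform. A quick check:

```python
import sys
import numpy

breg = numpy.zeros((16,), numpy.uint8)
wreg = numpy.ndarray((8,), numpy.uint16, breg)  # same 16 bytes, word view

breg[3] = 2  # byte 3 lives inside word 1 (bytes 2-3)
if sys.byteorder == "little":
    # on little-endian hosts, byte 3 is the HIGH byte of word 1
    assert wreg[1] == 0x0200
else:
    # on big-endian hosts, byte 3 is the LOW byte of word 1
    assert wreg[1] == 0x0002
```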


[snip]
>  Is python's array module the best (and fastest) implementation to
> "emulate" the memory?

I'd use numpy for this as well.  (I doubt the Z80 had a 16-bit bus,
but if it did you could use the same memory sharing trick I showed you
with the registers to simulate word reads and writes.)

Note that, when performing operations on single values, neither numpy
nor array module are necessarily a lot faster than Python lists, might
even be slower.  But they use a LOT less memory, which is important
for largish arrays.


[snip]
>  The Sinclair Spectrum 8 bit computer can address 64KB of memory and
> that memory is based on 16KB pages (so it can see 4 pages
> simultaneously, where page 0 is always ROM). Programs can change
> "pages" to point to aditional 16KB pages in 128KB memory models.
>
>  I don't know how to emulate paging in python...

numpy again.  This would mean you'd have to fiddle with addresses a
bit, but that shouldn't be too big a deal.  Create the physical
memory:

mem = numpy.zeros((128*1024,),numpy.uint8)

Then create the pages.  (This is a regular Python list containing
numpy slices. numpy slices share memory so there is no copying of
underlying data.)

page = [mem[0:16*1024],
mem[16*1024:32*1024],
mem[32*1024:48*1024],
mem[48*1024:64*1024]]

To access the byte at address 42432, you'd have use bit operations to
get a page number and index (2 and 9664 in this case), then you can
access the memory like this:

page[2][9664] = 33
p = page[3][99]

To swap a page, reassign the slice of main memory,

page[2] = mem[96*1024:112*1024]

Now, accessing address 42432 will access a byte from a different page.

If you don't want to fiddle with indirect pages and would just rather
copy memory around when a page swap occurs, you can do that, too.
(Assigning to a slice copies the data rather than shares.)  I don't
know if it's as fast as memset but it should be pretty quick.
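To make the distinction concrete (a small sketch, assuming NumPy): rebinding `page[2] = ...` just points the page at another region of `mem`, while slice-assigning `page[2][:] = ...` copies bytes into the currently mapped region:

```python
import numpy

mem = numpy.zeros((128 * 1024,), numpy.uint8)
page = [mem[i * 16384:(i + 1) * 16384] for i in range(4)]

page[2][9664] = 33
assert mem[2 * 16384 + 9664] == 33   # pages are views, not copies

page[2] = mem[96 * 1024:112 * 1024]  # remap: no data is copied
assert page[2][9664] == 0            # we now see the other bank

page[2][:] = mem[0:16384]            # slice-assign: data IS copied
assert mem[96 * 1024 + 9664] == 0    # into the mapped 96K-112K region
```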

.

Hope these brief suggestions help.  If you don't want third party
libraries, then numpy will be of no use.  But I guess if you're using
pygame third party modules are ok.  So get numpy, it'll make things a
lot easier.  It can be a bit daunting to learn, though.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Sriram Srinivasan

> You are describing a lending library, which is not the only sort of
> library. My personal library doesn't do any of those things. It is just a
> room with shelves filled with books.

how i see it is that all libraries are libraries; for a personal
library you are the only customer, and you are the management too!

> Words in English can have more than one meaning. Horses run,
> refrigerators run, and even though they don't have either legs or motors,
> programs run too. The argument that code libraries don't behave like
> lending libraries won't get you anywhere.

first, this is not an argument at all... i clearly stated these were
my imaginings and ideas.

> Since library features are tied closely to the features available in the
> Python interpreter, the way to use library 2.5 is to use Python 2.5. You
> *might* be able to use library 2.5 with Python 2.6 or 2.4; you absolutely
> won't be able to use it with Python 1.5 or 3.1.

i am not talking about the past.. the past was the experience the
developers had, and what is happening today *might be* another
experience for the future.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Writing an emulator in python - implementation questions (for performance)

2009-11-12 Thread Santiago Romero
> >  I'm trying to port (just for fun), my old Sinclair Spectrum emulator,
> > ASpectrum, from C to Python + pygame.
>
> The answer to your question is, "Use numpy".  More details below.

 Let's see :-)

> >  How can I implement this in Python, I mean, define a 16 byte variable
> > so that high and low bytes can be accessed separately and changing W,
> > H or L affects the entire variable? I would like to avoid doing BIT
> > masks to get or change HIGH or LOW parts of a variable and let the
> > compiled code to do it by itself.
>
> You can do clever memory slicing like this with numpy.  For instance:
>
> breg = numpy.zeros((16,),numpy.uint8)
> wreg = numpy.ndarray((8,),numpy.uint16,breg)
>
> This causes breg and wreg to share the same 16 bytes of memory.  You
> can define constants to access specific registers:
>
> R1L = 1
> R1H = 2
> R1 = 1
>
> breg[R1H] = 2
> print wreg[R1]

 And how about speed?

Assuming a 16 bit register named BC which contains 2 8 bit regiters (B
and C)...

 Will the above be faster than shifts and bit operations (<<, and, >>)
with new B and C values to "recalculate" BC when reading or changing
either B, C or BC?
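For comparison, the shift/mask version being asked about might look like this pure-Python sketch (register names follow the question; values are made up):

```python
BC = 0x1234          # one 16-bit value holding both 8-bit halves

# Reading B and C:
B = BC >> 8          # high byte -> 0x12
C = BC & 0xFF        # low byte  -> 0x34
assert (B, C) == (0x12, 0x34)

# Writing B means recalculating BC:
BC = (0x56 << 8) | (BC & 0xFF)
assert BC == 0x5634

# Writing C likewise:
BC = (BC & 0xFF00) | 0x78
assert BC == 0x5678
```

Whether this beats the numpy view trick is exactly the kind of thing the benchmark mentioned below would settle.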

> >  Is python's array module the best (and fastest) implementation to
> > "emulate" the memory?
>
> I'd use numpy for this as well.  (I doubt the Z80 had a 16-bit bus,
> but if it did you could use the same memory sharing trick I showed you
> with the registers to simulate word reads and writes.)

 Z80 has a 16-bit ADDRESS bus and an 8-bit DATA bus. This means you can
address 65536 memory cells of 8 bits each. Z80 has 16-bit bus
operations, but currently in C I write 16-bit values as two 8-bit
consecutive values without using (unsigned short *) pointers. But it
seems that numpy would allow me to do it better than C in this case...

> Note that, when performing operations on single values, neither numpy
> nor array module are necessarily a lot faster than Python lists, might
> even be slower.  But they use a LOT less memory, which is important
> for largish arrays.

 Well, we're talking about a 128KB 1-byte array, that's the maximum
memory size a Sinclair Spectrum can have, and always by using page
swapping of 16KB blocks in a 64KB addressable space...

 If you really think that python lists can be faster that numpy or
array modules, let me know.

 Maybe I'll make a "benchmark test", by making some millions of read /
write operations and timing the results.
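A rough way to run that benchmark with timeit (the numbers it prints will vary by machine; this is a sketch, not measured results):

```python
import timeit

setups = {
    "list":  "m = [0] * 65536",
    "array": "import array; m = array.array('B', [0] * 65536)",
}
try:
    import numpy  # optional third-party dependency
    setups["numpy"] = "import numpy; m = numpy.zeros(65536, numpy.uint8)"
except ImportError:
    pass

for name, setup in setups.items():
    # time one byte write plus one byte read per iteration
    t = timeit.timeit("m[42432] = 33; x = m[42432]",
                      setup=setup, number=100000)
    print("%-5s write+read: %.4f s" % (name, t))
```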

 I want to write the emulator as "pythonic" as possible...

> >  I don't know how to emulate paging in python...
>
> numpy again.  This would mean you'd have to fiddle with addresses a
> bit, but that shouldn't be too big a deal.  Create the physical
> memory:
>
> mem = numpy.zeros((128*1024,),numpy.uint8)

 A 128K array of zeroes...

> Then create the pages.  (This is a regular Python list containing
> numpy slices. numpy slices share memory so there is no copying of
> underlying data.)
>
> page = [mem[0:16*1024],
>         mem[16*1024:32*1024],
>         mem[32*1024:48*1024],
>         mem[48*1024:64*1024]]

 Those are just like pointers to the "mem" numpy array, pointing to
concrete start indexes, aren't they?

> To access the byte at address 42432, you'd have use bit operations to
> get a page number and index (2 and 9664 in this case), then you can
> access the memory like this:

 Do you mean:

page = address / 16384
index = address MOD 16384

 ?

 Or, better, with:

  page = address >> 14
  index = address & 16383

 ?
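Both pairs compute the same thing for 16 KB pages; the shift/mask form is the conventional idiom:

```python
address = 42432
page, index = address >> 14, address & 16383
assert (page, index) == (2, 9664)
# identical to the division/modulo form (// is floor division):
assert (page, index) == (address // 16384, address % 16384)
```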


> page[2][9664] = 33
> p = page[3][99]
>
> To swap a page, reassign the slice of main memory,
>
> page[2] = mem[96*1024:112*1024]
>
> Now, accessing address 42432 will access a byte from a different page.

 But wouldn't the above calculations (page, index) be slower than a
simple plain 64KB numpy array (with no paging mechanism on reads or
writes), copying memory slices when page swaps are requested?

> If you don't want to fiddle with indirect pages and would just rather
> copy memory around when a page swap occurs, you can do that, too.
> (Assigning to a slice copies the data rather than shares.)  I don't
> know if it's as fast as memset but it should be pretty quick.

 That's what I was talking about.
 With "page and index" I'm slowing down EACH memory read and write,
and that includes opcode and data fetching...

 With "memory copying in page swaps", memory is always read and written
quickly, and if "slice copies" are fast enough, the emulation will run
at >100% speed (I hope) for a 3.5 MHz system ...

> Hope these brief suggestions help.  If you don't want third party
> libraries, then numpy will be of no use.  But I guess if you're using
> pygame third party modules are ok.  So get numpy, it'll make things a
> lot easier.  It can be a bit daunting to learn, though.

 Yes, you've been very helpful!

 How about numpy portability? Is array more portable?

 And finally, do you think I'm doing right by using global variables
for registers, memory, and so, or should I put ALL into

Re: Python & Go

2009-11-12 Thread kj
In <[email protected]> Carl 
Banks  writes:

>...but the lack of
>inheritance is a doozie.

That's what I like most about it.  Inheritance introduces at least
as many headaches as it solves.  For one, it leads to spaghetti
code.  When reading such code, to follow any method call one must
check in at least two places: the base class and the subclass.
The deeper the inheritance chain, the more places one must check.
For every method call.  Yeecch.  Good riddance.

kj
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: C api and checking for integers

2009-11-12 Thread casevh
On Nov 12, 1:28 am, "lallous"  wrote:
> Hello,
>
> I am a little confused on how to check if a python variable is an integer or
> not.
>
> Sometimes PyInt_Check() fails and PyLong_Check() succeeds.

I assume you are using Python 2.x. There are two integer types: (1)
PyInt which stores small values that can be stored in a single C long
and (2) PyLong which stores values that may or may not fit in a single
C long. The number 2 could arrive as either a PyInt or a PyLong.

Try something like the following (C sketch, error handling elided):

long myvar;
if (PyInt_CheckExact(py_val)) {
    myvar = PyInt_AS_LONG(py_val);
}
else if (PyLong_CheckExact(py_val)) {
    myvar = PyLong_AsLong(py_val);
    if (myvar == -1 && PyErr_Occurred()) {
        /* too big to fit in a C long */
    }
}
Python 3.x is a little easier since everything is a PyLong.

>
> How to properly check for integer values?
>
> OTOH, I tried PyNumber_Check() and:
>
> (1) The doc says: Returns 1 if the object o provides numeric protocols, and
> false otherwise. This function always succeeds.
>
> What do they mean: "always succeeds" ?

That it will always return true or false; it won't raise an error.

>
> (2) It seems PyNumber_check(py_val) returns true when passed an instance!
>
> Please advise.
>
> --
> Elias

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python simply not scaleable enough for google?

2009-11-12 Thread mcherm
On Nov 11, 7:38 pm, Vincent Manis  wrote:
> 1. The statement `Python is slow' doesn't make any sense to me.
> Python is a programming language; it is implementations that have
> speed or lack thereof.
   [...]
> 2. A skilled programmer could build an implementation that compiled
> Python code into Common Lisp or Scheme code, and then used a
> high-performance Common Lisp compiler...

I think you have a fundamental misunderstanding of the reasons why
Python is slow. Most of the slowness does NOT come from poor
implementations: the CPython implementation is extremely
well-optimized; the Jython and IronPython implementations use
best-in-the-world JIT runtimes. Most of the speed issues come from
fundamental features of the LANGUAGE itself, mostly ways in which it
is highly dynamic.
In Python, a piece of code like this:
len(x)
needs to watch out for the following:
    * Perhaps x is a list OR
      * Perhaps x is a dict OR
      * Perhaps x is a user-defined type that declares a __len__ method OR
      * Perhaps a superclass of x declares __len__ OR
    * Perhaps we are running the built-in len() function OR
      * Perhaps there is a global variable 'len' which shadows the built-in OR
      * Perhaps there is a local variable 'len' which shadows the built-in OR
      * Perhaps someone has modified __builtins__

In Python it is possible for other code, outside your module, to go in
and modify or replace some methods from your module (a feature called
"monkey-patching" which is SOMETIMES useful for certain kinds of
testing). There are just so many things that can be dynamic (even if
99% of the time they are NOT dynamic) that there is very little that
the compiler can assume.

So whether you implement it in C, compile to CLR bytecode, or
translate into Lisp, the computer is still going to have to do a whole
bunch of lookups to make certain that there isn't some monkey business
going on, rather than simply reading a single memory location that
contains the length of the list. Brett Cannon's thesis is an example:
he attempted desperate measures to perform some inferences that would
allow performing these optimizations safely and, although a few of
them could work in special cases, most of the hoped-for improvements
were impossible because of the dynamic nature of the language.

I have seen a number of attempts to address this, either by placing
some restrictions on the dynamic nature of the code (but that would
change the nature of the Python language) or by having some sort of a
JIT optimize the common path where we don't monkey around. Unladen
Swallow and PyPy are two such efforts that I find particularly
promising.

But it isn't NEARLY as simple as you make it out to be.

-- Michael Chermside
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Can't Write File

2009-11-12 Thread Victor Subervi
On Wed, Nov 11, 2009 at 6:35 PM, Rhodri James
wrote:

> On Wed, 11 Nov 2009 14:00:44 -, Victor Subervi <
> [email protected]> wrote:
>
>  6) you don't indicate which user is executing this script (only root can
>>> write to it)
>>>
>>>  Help me on this. All scripts are owned by root. Is it not root that is
>> executing the script?
>>
>
> Not unless your server setup is very, very stupid.  CGI scripts normally
> run as a user with *very* limited permissions to limit (amongst other
> things) the damage ineptly written scripts can do.
>

Thanks. I'll change it.
V

>
> --
> Rhodri James *-* Wildebeest Herder to the Masses
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why Error is derived from EnvironmentError in shutil.py?

2009-11-12 Thread Peng Yu
On Thu, Nov 12, 2009 at 12:00 AM, alex23  wrote:
> On Nov 12, 2:46 pm, Peng Yu  wrote:
>> I see Error is derived from EnvironmentError in shutil.py.
>>
>> class Error(EnvironmentError):
>>     pass
>>
>> I'm wondering why EnvironmentError can not be raised directly. Why
>> Error is raised instead?
>
> This way you can explicitly trap on shutil.Error and not intercept any
> other EnvironmentError that might be raised.
>
> And as it's a descendent of EnvironmentError, it can still be caught
> by any handlers looking for such exceptions.

Is it true that an exception class is usually defined in a module and
all the exceptions raised in that module should be of this derived
class? Is that the general practice in Python?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: CGI vs mod_python

2009-11-12 Thread Victor Subervi
On Wed, Nov 11, 2009 at 3:32 PM, Dave Angel  wrote:

>
>
> Victor Subervi wrote:
>
>> 
>>
>> The problem was not CGI. It turned out to be line-endings being mangled by
>> Windoze and __invisible __ in my unix editor. Lovely.
>> Thanks anyway,
>> V
>>
>>
>>
> That's twice you've blamed Windows for the line-ending problem.  Windows
> didn't create those crlf endings, your text editor did.  If your editor
> can't control that, you could try a different one.  Komodo for example can
> do it either way, or it can preserve whatever is being used in the loaded
> file.  Similarly metapad, in spite of its huge simplicity, lets you decide,
> and can convert an existing file in either direction.
>
> And I'm a great believer in visible control characters.  I configure Komodo
> to show me spaces in the lines, so I can see whether it's a tab or a space.
>  It can also be configured to show end-of-line characters, so I presume
> that'd work here.  See whether your Unix editor can show you this sort of
> thing.
>
> Finally, many FTP programs can be told to automatically convert
> line-endings when transferring text files.  There's probably some risk that
> it'll mangle a non-text file, but it's worth considering.
>

Wonderful, wonderful. I'll take both of those pieces of advice. Thank you.
V
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why Error is derived from EnvironmentError in shutil.py?

2009-11-12 Thread Chris Rebert
On Thu, Nov 12, 2009 at 7:10 AM, Peng Yu  wrote:
> On Thu, Nov 12, 2009 at 12:00 AM, alex23  wrote:
>> On Nov 12, 2:46 pm, Peng Yu  wrote:
>>> I see Error is derived from EnvironmentError in shutil.py.
>>>
>>> class Error(EnvironmentError):
>>>     pass
>>>
>>> I'm wondering why EnvironmentError can not be raised directly. Why
>>> Error is raised instead?
>>
>> This way you can explicitly trap on shutil.Error and not intercept any
>> other EnvironmentError that might be raised.
>>
>> And as it's a descendent of EnvironmentError, it can still be caught
>> by any handlers looking for such exceptions.
>
> It is true that an exception class is usually defined in a module and
> all the exceptions raised in this module should be of this derived
> class? Is it the general practice in python?

It is common, but not universal. Several of the modules in the stdlib
use the technique.
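The idiom looks like this (a minimal sketch, not shutil's actual code):

```python
class Error(EnvironmentError):
    """Module-specific error, still catchable as EnvironmentError."""
    pass

def copy_something():
    # stand-in for a module operation that fails
    raise Error("simulated failure")

# Callers can trap the module's errors narrowly...
try:
    copy_something()
except Error:
    pass

# ...or broadly, since Error IS an EnvironmentError:
try:
    copy_something()
except EnvironmentError:
    pass
```

(In Python 3, EnvironmentError is an alias of OSError, so both spellings work.)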

Cheers,
Chris
--
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


(OT) Recommend FTP Client

2009-11-12 Thread Victor Subervi
Hi;
Someone on this list just recommended I find an ftp client that enables me
to change line endings. He indicated that it would be easy, but googling I
haven't been able to find one. I would prefer a free client, but whatever.
Please send me a recommendation.
TIA,
Victor
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Diez B. Roggisch

Sriram Srinivasan schrieb:
> On Nov 12, 6:07 pm, "Diez B. Roggisch"  wrote:
>>> ok let me make it more clear..
>>> forget how you use python now.. i am talking about __futuristic__
>>> python programming.
>>> there is no more python2.x or python3.x or python y.x releases. there
>>> is only updates of python and standard library say 1.1.5 and 1.1.6.
>>> let the difference be an old xml library updated with a new regex
>>> support.
>>> i am coding my program now.
>>> i want my application to be compatible with 1.1.5 library
>>> use library 1.1.5
>>> import blah form blahblah
>>> ...
>>> ...
>>> i cannot use regex feature of xml in this application
>>> i then update my library in the evening
>>> now both the libraries are present in my system.
>>> now i try using the new feature
>>> use library 1.1.6 #thats all now i get all features
>>> import blah from blahblah
>>
>> This is not futuristic, this is state of the art with PyPI & setuptools.
>>
>> You still haven't addressed all the issues that arise though, regarding
>> different python interpreter versions having different syntax and ABI.
>>
>> Diez
>
> haha... it would be awesome if they implement it in the 'future' .. i
> posted the same to [email protected], it seems the distutil is
> getting a big overhaul. (i hope they make a new uninstall option for
> setuptools n easy_install) they say there are many ways to do that
> using pkg tools... but python wants one and only one way- the python
> way.
> regarding issues:
>
> 1.security is always a issue
> 2. regarding problems like with statement you mentioned earlier.. i
> think there will be a basic feature set that is common for all
> versions of add-on libraries.


So all libraries written have to use the common subset, which - unless 
things are *removed*, which with python3 actually happened - is always 
the oldest interpreter. And if a feature goes away, they have to be 
rewritten with the then common subset.


In other words: as a library writer, I can't use shiny, new & better
features; I'm stuck with the old stuff, and whenever the interpreter
evolves I have to rewrite even the old ones. Not appealing. Why develop
the interpreter at all then?



> 3.when you want the new feature you have to update your python
> interpreter
>
> use interpreter 1.5.2
>
> may trigger the proper interpreter plugin(responsible for additional
> feature) to load and add functionality.. its simple to say but it is
> hard to make the compiler pluggable, may be they figure out.

>


> use library x.y.z
>
> while issuing this command the default interpreter has to
> automatically resolve dependencies of the core c/cpp static libraries
> and other shared libraries. so that must not be an issue. if they have
> implemented this much, dep solving is nothing.


In other words: the problem is solved by somehow solving the problem - 
but not by a concrete solution you propose?




> futuristic python may also contain an option for compiling a module
> into a static library. so we can code python libraries in python
> (mostly) itself. think of pypy. it is already state of the art.

PyPy is cool, but by far not advanced enough to replace external, 
C-based libraries such as NUMPY and PyQt and whatnot.


I don't say that the current situation is by any means ideal. There is a 
lot to be said about package creation, distribution, installation, 
removal, dependency handling, and even system-packaging-integration if 
one likes.


You IMHO just add another layer of complexity on top of it, without 
proposing solutions to any of the layers in between, nor your new one - 
namely, the interpreter version agnosticism.


Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: python simply not scaleable enough for google?

2009-11-12 Thread Joel Davis
On Nov 12, 10:07 am, mcherm  wrote:
> On Nov 11, 7:38 pm, Vincent Manis  wrote:
>
> > 1. The statement `Python is slow' doesn't make any sense to me.
> > Python is a programming language; it is implementations that have
> > speed or lack thereof.
>    [...]
> > 2. A skilled programmer could build an implementation that compiled
> > Python code into Common Lisp or Scheme code, and then used a
> > high-performance Common Lisp compiler...
>
> I think you have a fundamental misunderstanding of the reasons why
> Python is
> slow. Most of the slowness does NOT come from poor implementations:
> the CPython
> implementation is extremely well-optimized; the Jython and Iron Python
> implementations use best-in-the-world JIT runtimes. Most of the speed
> issues
> come from fundamental features of the LANGUAGE itself, mostly ways in
> which
> it is highly dynamic.
>
> In Python, a piece of code like this:
>     len(x)
> needs to watch out for the following:
>     * Perhaps x is a list OR
>       * Perhaps x is a dict OR
>       * Perhaps x is a user-defined type that declares a __len__
> method OR
>       * Perhaps a superclass of x declares __len__ OR
>     * Perhaps we are running the built-in len() function OR
>       * Perhaps there is a global variable 'len' which shadows the
> built-in OR
>       * Perhaps there is a local variable 'len' which shadows the
> built-in OR
>       * Perhaps someone has modified __builtins__
>
> In Python it is possible for other code, outside your module to go in
> and
> modify or replace some methods from your module (a feature called
> "monkey-patching" which is SOMETIMES useful for certain kinds of
> testing).
> There are just so many things that can be dynamic (even if 99% of the
> time
> they are NOT dynamic) that there is very little that the compiler can
> assume.
>
> So whether you implement it in C, compile to CLR bytecode, or
> translate into
> Lisp, the computer is still going to have to to a whole bunch of
> lookups to
> make certain that there isn't some monkey business going on, rather
> than
> simply reading a single memory location that contains the length of
> the list.
> Brett Cannon's thesis is an example: he attempted desperate measures
> to
> perform some inferences that would allow performing these
> optimizations
> safely and, although a few of them could work in special cases, most
> of the
> hoped-for improvements were impossible because of the dynamic nature
> of the
> language.
>
> I have seen a number of attempts to address this, either by placing
> some
> restrictions on the dynamic nature of the code (but that would change
> the
> nature of the Python language) or by having some sort of a JIT
> optimize the
> common path where we don't monkey around. Unladen Swallow and PyPy are
> two
> such efforts that I find particularly promising.
>
> But it isn't NEARLY as simple as you make it out to be.
>
> -- Michael Chermside

obviously the GIL is a major reason it's so slow. That's one of the
_stated_ reasons why Google has decided to forgo CPython code. As far
as how sweeping the directive is, I don't know, since the situation
would sort of resolve itself if one committed to Jython application
building or just wait until unladen swallow is finished.
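
The dynamism Michael describes in the quoted text is easy to demonstrate:
shadowing or monkey-patching changes what len(x) means at run time, which
is exactly what stops a compiler from hard-wiring the lookup. A minimal
sketch (the Box class is invented for illustration):

```python
class Box(object):
    """A user-defined type: len() must dispatch to __len__ at run time."""
    def __len__(self):
        return 3

x = Box()
print(len(x))            # 3 -- found via Box.__len__

# Monkey-patch the class after the fact; existing instances change behaviour.
Box.__len__ = lambda self: 99
print(len(x))            # 99 -- no recompilation, the lookup is dynamic

# A local name can shadow the built-in entirely.
def shadowed():
    len = lambda obj: -1   # shadows the built-in inside this scope
    return len(x)

print(shadowed())        # -1
```

Every one of these rebindings is legal, so the interpreter has to repeat
the lookup on each call rather than read a cached length slot.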
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Writing an emulator in python - implementation questions (for performance)

2009-11-12 Thread Dave Angel



Santiago Romero wrote:

 I'm trying to port (just for fun), my old Sinclair Spectrum emulator,
A



 Do you mean:

page = address / 16384
index = address MOD 16384

 ?

 Or, better, with:

  page = address >> 14
  index = address & 16383

 ?

  

How about
   page, index = divmod(address, 16384)
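
For a 16K page size the spellings agree; a quick check (assuming
non-negative addresses) that divmod matches the shift-and-mask version:

```python
def split_shift(address):
    # page = top bits, index = low 14 bits (16384 == 2**14)
    return address >> 14, address & 16383

def split_divmod(address):
    return divmod(address, 16384)

for address in (0, 1, 16383, 16384, 40000, 65535):
    assert split_shift(address) == split_divmod(address)

print(split_divmod(40000))   # (2, 7232)
```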



--
http://mail.python.org/mailman/listinfo/python-list


#define (from C) in Python

2009-11-12 Thread Santiago Romero

Is there a Python version of C's language #define statements?

Example:

#define ReadMem( (x) )    memory[ (x) ]

 Instead of using a function, when you call to ReadMem(), the code is
INCLUDED, (no function is called, the "compiler" just substitues the
ReadMem( expression ) with memory[ (expression) ] .

 I want to avoid function calls to speed up a program by sacrifizing
the resulting size ...

 Is that possible?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: #define (from C) in Python

2009-11-12 Thread Diez B. Roggisch

Santiago Romero schrieb:

Is there a Python version of C's language #define statements?

Example:

#define ReadMem( (x) )    memory[ (x) ]

 Instead of using a function, when you call to ReadMem(), the code is
INCLUDED, (no function is called, the "compiler" just substitues the
ReadMem( expression ) with memory[ (expression) ] .

 I want to avoid function calls to speed up a program by sacrifizing
the resulting size ...

 Is that possible?


Not without taking the extra step of a ... preprocessor. As C does.

Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: #define (from C) in Python

2009-11-12 Thread Michele Simionato
On Nov 12, 5:43 pm, Santiago Romero  wrote:
> Is there a Python version of C's language #define statements?
>
> Example:
>
> #define ReadMem( (x) )    memory[ (x) ]
>
>  Instead of using a function, when you call to ReadMem(), the code is
> INCLUDED, (no function is called, the "compiler" just substitues the
> ReadMem( expression ) with memory[ (expression) ] .
>
>  I want to avoid function calls to speed up a program by sacrifizing
> the resulting size ...
>
>  Is that possible?

Python is a slow language and people usually do not even think of such
micro-optimizations.
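
For what it's worth, the call overhead in question can be measured with
the stdlib timeit module rather than guessed at. A sketch (the globals=
keyword needs Python 3.5+; the array and function names are invented):

```python
import timeit

memory = [0] * 65536

def read_mem(addr):
    return memory[addr]

# Time a direct subscript versus the same subscript behind a function call.
inline = timeit.timeit('memory[12345]',
                       globals={'memory': memory}, number=100000)
call = timeit.timeit('read_mem(12345)',
                     globals={'read_mem': read_mem}, number=100000)
print('inline: %.4fs  call: %.4fs' % (inline, call))
```

The difference per call is small in absolute terms, which is why such
micro-optimization rarely pays off outside of very hot loops.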
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Sriram Srinivasan
> So all libraries written have to use the common subset, which - unless
> things are *removed*, which with python3 actually happened - is always
> the oldest interpreter. And if a feature goes away, they have to be
> rewritten with the then common subset.

you see that's the problem with py3. instead of slowly removing the
features one by one till the end they made a jump. the problem is not
with the jump but with other libraries/packages that depend on py2.
what i felt was if the new version was available as soon as it is into
the stream, developers/users try to use them and get accustomed. now
they follow this. they have frozen py3 for may be 2-3 years so ppl
would move slowly to py2.6 ,2.7, 2.8... most. also the corporate
companies are the most influential.

also a point to note: if new updates keep on coming (for eg. daily
antivirus update) developers tend to accept and move on *easily*.

i am not an expert in programming/interpreter design/library writing/
packaging. i just wanted the standard library *too* to be pluggable,
not just the applications based on it.
i am not even experienced in python to make bold accusations so all i
can say is 'how it would be if?' and i always stick to one thing 'why
not?'.

> In other words: as a library writer, I can't use shiny, new & better
> features, I'm stuck with the old stuff, and whenever the interpreter
> evolves I have to rewrite even the old ones. Not appealing. Why develop
> the interpreter at all then?

if you are app developer you needn't rewrite anything. that is the
main point here! just specifying the interpreter and library version
is enough every thing has to work like magic. as the previous
library&interpreter wont be removed (unless and until it is your will)
u can use them. thats how everybody would like it to be.

as for a library writer is concerned: as you say it depends on certain
things like other libraries,interpreter,etc. but i would like to say
that change is inevitable. if some core developer has to change
something he might change. and if you have to update your library you
have to. this applies to the interpreter too. what python now does is
make every update work with the minimal set (giving minimal backward
compatibility) plus the new features too. which is exactly the right
thing. what i wanted is the users have to be aware now and then when
of the modification, deletion, addition stuff not in every release as
in py3k. then the whole backward incompatibility issue would be
*dissolved* or what i mean is *diffused* and ppl dont even know that
they have switched over to a very new thing.



> In other words: the problem is solved by somehow solving the problem -
> but not by a concrete solution you propose?

as i told before i neither know the problem nor the solution. i just
wrote my ideas/imagination down

> PyPy is cool, but by far not advanced enough to replace external,
> C-based libraries such as NUMPY and PyQt and whatnot.
>
> I don't say that the current situation is by any means ideal. There is a
> lot to be said about package creation, distribution, installation,
> removal, dependency handling, and even system-packaging-integration if
> one likes.
>
> You IMHO just add another layer of complexity on top of it, without
> proposing solutions to any of the layers in between, nor your new one -
> namely, the interpreter version agnosticism.

yes you are right. python has a lot of bindings to a lot of stuff. but
it is the strength and weakness. not exactly pythons weakness but with
the thing on the other side. yes it may be looking complex because
most of the 'standard library' system was not designed to be adhoc/add-
on/pluggable from the start.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python simply not scaleable enough for google?

2009-11-12 Thread Steven D'Aprano
On Thu, 12 Nov 2009 08:35:23 -0800, Joel Davis wrote:

> obviously the GIL is a major reason it's so slow. 

No such "obviously" about it.

There have been attempts to remove the GIL, and they lead to CPython 
becoming *slower*, not faster, for the still common case of single-core 
processors.

And neither Jython nor IronPython have the GIL. Jython appears to scale 
slightly better than CPython, but for small data sets, is slower than 
CPython. IronPython varies greatly in performance, ranging from nearly 
twice as fast as CPython on some benchmarks to up to 6000 times slower!

http://www.smallshire.org.uk/sufficientlysmall/2009/05/22/ironpython-2-0-and-jython-2-5-performance-compared-to-python-2-5/

http://ironpython-urls.blogspot.com/2009/05/python-jython-and-ironpython.html


Blaming CPython's supposed slowness on the GIL is superficially plausible 
but doesn't stand up to scrutiny. The speed of an implementation depends 
on many factors, and it also depends on *what you measure* -- it is sheer 
nonsense to talk about "the" speed of an implementation. Different tasks 
run at different speeds, and there is no universal benchmark.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: #define (from C) in Python

2009-11-12 Thread Stefan Behnel
Santiago Romero, 12.11.2009 17:43:
> Is there a Python version of C's language #define statements?
> 
> Example:
> 
> #define ReadMem( (x) )    memory[ (x) ]

Yes:

ReadMem = memory.__getitem__
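
Stefan's one-liner in action: binding the method gives a callable with no
Python-level function frame per call (assuming memory is a list or
similar sequence):

```python
memory = [0] * 65536
memory[0x4000] = 0xAB

ReadMem = memory.__getitem__   # bound C-level method, no def wrapper needed
print(hex(ReadMem(0x4000)))    # 0xab
```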

Stefan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ask a question about the module

2009-11-12 Thread Daniel Fetchinson
> Could not import module "Gnuplot" - it is not installed on your
> system. You need to install the Gnuplot.py package.
> \easyviz\gnuplot_.py(41) :
> Traceback (most recent call last):
>   File "E:\study\python\commodity modle 10.23.py", line 3, in 
> import multipleloop as mp
>   File "E:\study\python\multipleloop.py", line 81, in 
> from scitools.misc import str2obj
>   File "C:\Python26\lib\site-packages\scitools\__init__.py", line 67,
> in 
> import sys, std
>   File "C:\Python26\lib\site-packages\scitools\std.py", line 26, in
> 
> from scitools.easyviz import *
>   File "C:\Python26\lib\site-packages\scitools\easyviz\__init__.py",
> line 1819, in 
> exec(cmd)
>   File "", line 1, in 
>   File "C:\Python26\lib\site-packages\scitools\easyviz\gnuplot_.py",
> line 42, in 
> import Gnuplot
> ImportError: No module named Gnuplot
>
>
> the above is the hint from python window, I know Gnuplot is a software
> and it can help us to make graphs, however, after I download Gnuplot
> and run it, the problem still exist. Is there any expert can help me
> solve this problem?
> Thank you very much!


Did you also install the python bindings for gnuplot? If not, you can
get it from
http://gnuplot-py.sourceforge.net/

HTH,
Daniel
-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: #define (from C) in Python

2009-11-12 Thread Santiago Romero
On 12 nov, 18:16, Stefan Behnel  wrote:
> Santiago Romero, 12.11.2009 17:43:
>
> > Is there a Python version of C's language #define statements?
>
> > Example:
>
> > #define ReadMem( (x) )    memory[ (x) ]
>
> Yes:
>
>         ReadMem = memory.__getitem__
>
> Stefan


 Well, In the above concrete example, that would work, but I was
talking for multiple code lines, like:


#define LD_r_n(reg) (reg) = Z80ReadMem(r_PC++)

#define LD_rr_nn(reg)   r_opl = Z80ReadMem(r_PC); r_PC++; \
r_oph = Z80ReadMem(r_PC); r_PC++; \
reg = r_op

#define LOAD_r(dreg, saddreg)   (dreg)=Z80ReadMem((saddreg))

#define LOAD_rr_nn(dreg)   r_opl = Z80ReadMem(r_PC); r_PC++; \
   r_oph = Z80ReadMem(r_PC); r_PC++; \
   r_tmpl = Z80ReadMem(r_op); \
   r_tmph = Z80ReadMem((r_op)+1); \
   dreg=r_tmp

#define STORE_nn_rr(dreg) \
r_opl = Z80ReadMem(r_PC); r_PC++;\
r_oph = Z80ReadMem(r_PC); r_PC++; \
r_tmp = dreg; \
Z80WriteMem((r_op),r_tmpl, regs); \
Z80WriteMem((r_op+1),r_tmph, regs)


 But it seems that is not possible :-(
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python simply not scaleable enough for google?

2009-11-12 Thread Alf P. Steinbach

* Steven D'Aprano:

On Thu, 12 Nov 2009 08:35:23 -0800, Joel Davis wrote:

obviously the GIL is a major reason it's so slow. 



http://en.wikipedia.org/wiki/Global_Interpreter_Lock

Uh oh...



No such "obviously" about it.

There have been attempts to remove the GIL, and they lead to CPython 
becoming *slower*, not faster, for the still common case of single-core 
processors.


And neither Jython nor IronPython have the GIL. Jython appears to scale 
slightly better than CPython, but for small data sets, is slower than 
CPython. IronPython varies greatly in performance, ranging from nearly 
twice as fast as CPython on some benchmarks to up to 6000 times slower!


http://www.smallshire.org.uk/sufficientlysmall/2009/05/22/ironpython-2-0-and-jython-2-5-performance-compared-to-python-2-5/

http://ironpython-urls.blogspot.com/2009/05/python-jython-and-ironpython.html


Blaming CPython's supposed slowness


Hm, this seems religious.

Of course Python is slow: if you want speed, pay for it by complexity.

It so happens that I think CPython could have been significantly faster, but (1) 
doing that would amount to creating a new implementation, say, C++Python , 
and (2) what for, really?, since CPU-intensive things should be offloaded to 
other language code anyway.



on the GIL is superficially plausible 
but doesn't stand up to scrutiny. The speed of an implementation depends 
on many factors, and it also depends on *what you measure* -- it is sheer 
nonsense to talk about "the" speed of an implementation. Different tasks 
run at different speeds, and there is no universal benchmark.


This also seems religious. It's like in Norway it became illegal to market lemon 
soda, since umpteen years ago it's soda with lemon flavoring. This has to do 
with the *origin* of the citric acid, whether natural or chemist's concoction, 
no matter that it's the same chemical. So, some people think that it's wrong to 
talk about interpreted languages, hey, it should be a "language designed for 
interpretation", or better yet, "dynamic language", or bestest, "language with 
dynamic flavor". And slow language, oh no, should be "language whose current 
implementations are perceived as somewhat slow by some (well, all) people", but 
of course, that's just silly.



Cheers,

- Alf
--
http://mail.python.org/mailman/listinfo/python-list


Re: python simply not scaleable enough for google?

2009-11-12 Thread J Kenneth King
mcherm  writes:

> On Nov 11, 7:38 pm, Vincent Manis  wrote:
>> 1. The statement `Python is slow' doesn't make any sense to me.
>> Python is a programming language; it is implementations that have
>> speed or lack thereof.
>[...]
>> 2. A skilled programmer could build an implementation that compiled
>> Python code into Common Lisp or Scheme code, and then used a
>> high-performance Common Lisp compiler...
>
> I think you have a fundamental misunderstanding of the reasons why
> Python is
> slow. Most of the slowness does NOT come from poor implementations:
> the CPython
> implementation is extremely well-optimized; the Jython and Iron Python
> implementations use best-in-the-world JIT runtimes. Most of the speed
> issues
> come from fundamental features of the LANGUAGE itself, mostly ways in
> which
> it is highly dynamic.
>
> In Python, a piece of code like this:
> len(x)
> needs to watch out for the following:
> * Perhaps x is a list OR
>   * Perhaps x is a dict OR
>   * Perhaps x is a user-defined type that declares a __len__
> method OR
>   * Perhaps a superclass of x declares __len__ OR
> * Perhaps we are running the built-in len() function OR
>   * Perhaps there is a global variable 'len' which shadows the
> built-in OR
>   * Perhaps there is a local variable 'len' which shadows the
> built-in OR
>   * Perhaps someone has modified __builtins__
>
> In Python it is possible for other code, outside your module to go in
> and
> modify or replace some methods from your module (a feature called
> "monkey-patching" which is SOMETIMES useful for certain kinds of
> testing).
> There are just so many things that can be dynamic (even if 99% of the
> time
> they are NOT dynamic) that there is very little that the compiler can
> assume.
>
> So whether you implement it in C, compile to CLR bytecode, or
> translate into
> Lisp, the computer is still going to have to to a whole bunch of
> lookups to
> make certain that there isn't some monkey business going on, rather
> than
> simply reading a single memory location that contains the length of
> the list.
> Brett Cannon's thesis is an example: he attempted desperate measures
> to
> perform some inferences that would allow performing these
> optimizations
> safely and, although a few of them could work in special cases, most
> of the
> hoped-for improvements were impossible because of the dynamic nature
> of the
> language.
>
> I have seen a number of attempts to address this, either by placing
> some
> restrictions on the dynamic nature of the code (but that would change
> the
> nature of the Python language) or by having some sort of a JIT
> optimize the
> common path where we don't monkey around. Unladen Swallow and PyPy are
> two
> such efforts that I find particularly promising.
>
> But it isn't NEARLY as simple as you make it out to be.
>
> -- Michael Chermside

You might be right for the wrong reasons in a way.

Python isn't slow because it's a dynamic language.  All the lookups
you're citing are highly optimized hash lookups.  It executes really
fast.

The OP is talking about scale.  Some people say Python is slow at a
certain scale.  I say that's about true for any language.  Large amounts
of IO is a tough problem.

Where Python might get hit *as a language* is that the Python programmer
has to drop into C to implement optimized data-structures for dealing
with the kind of IO that would slow down the Python interpreter.  That's
why we have numpy, scipy, etc.  The special cases it takes to solve
problems with custom types wasn't special enough to alter the language.
Scale is a special case believe it or not.

As an implementation though, the sky really is the limit and Python is
only getting started.  Give it another 40 years and it'll probably
realize that it's just another Lisp. ;)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: #define (from C) in Python

2009-11-12 Thread Stefan Behnel
Santiago Romero, 12.11.2009 18:23:
> #define LD_r_n(reg) (reg) = Z80ReadMem(r_PC++)
> 
> #define LD_rr_nn(reg)   r_opl = Z80ReadMem(r_PC); r_PC++; \
> r_oph = Z80ReadMem(r_PC); r_PC++; \
> reg = r_op
> 
> #define LOAD_r(dreg, saddreg)   (dreg)=Z80ReadMem((saddreg))
> 
> #define LOAD_rr_nn(dreg)   r_opl = Z80ReadMem(r_PC); r_PC++; \
>r_oph = Z80ReadMem(r_PC); r_PC++; \
>r_tmpl = Z80ReadMem(r_op); \
>r_tmph = Z80ReadMem((r_op)+1); \
>dreg=r_tmp
> 
> #define STORE_nn_rr(dreg) \
> r_opl = Z80ReadMem(r_PC); r_PC++;\
> r_oph = Z80ReadMem(r_PC); r_PC++; \
> r_tmp = dreg; \
> Z80WriteMem((r_op),r_tmpl, regs); \
> Z80WriteMem((r_op+1),r_tmph, regs)
> 
>  But it seems that is not possible :-(

As Michele said, this is a micro optimisation, likely premature. A function
is usually good enough for this. If you need more speed (and you seem to be
targeting direct memory access or something like that), take a look at Cython.
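
As an illustration of the function route: the macro bodies translate
directly into methods on a CPU object. The names mirror the macros quoted
above, but the register model here is invented for the sketch, not taken
from the actual emulator:

```python
class CPU(object):
    def __init__(self, memory):
        self.memory = memory
        self.pc = 0

    def read_mem(self, addr):
        return self.memory[addr]

    def ld_rr_nn(self):
        """Rough equivalent of the LD_rr_nn macro: fetch a 16-bit
        little-endian word at PC and advance PC past it."""
        lo = self.read_mem(self.pc); self.pc += 1
        hi = self.read_mem(self.pc); self.pc += 1
        return lo | (hi << 8)

cpu = CPU([0x34, 0x12] + [0] * 100)
print(hex(cpu.ld_rr_nn()))   # 0x1234
```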

Stefan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Writing an emulator in python - implementation questions (for performance)

2009-11-12 Thread Santiago Romero
> You can do clever memory slicing like this with numpy.  For instance:
>
> breg = numpy.zeros((16,),numpy.uint8)
> wreg = numpy.ndarray((8,),numpy.uint16,breg)
>
> This causes breg and wreg to share the same 16 bytes of memory.  You
> can define constants to access specific registers:

 What I'm doing wrong?

[srom...@compiler:~]$ cat test.py
#!/usr/bin/python

import numpy

# Register array
breg = numpy.zeros((16,),numpy.uint8)
wreg = numpy.ndarray((8,), numpy.uint16, breg )

reg_A   = 1
reg_F   = 2
reg_AF  = 1
reg_B   = 3
reg_C   = 4
reg_BC  = 3

breg[reg_B] = 5
breg[reg_C] = 10
print breg[reg_B]
print breg[reg_C]
print wreg[reg_BC]


[srom...@compiler:~]$ python test.py
5
10
0

 ?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can a module know the module that imported it?

2009-11-12 Thread AK Eric
so:

# moduleA.py
import moduleB

# moduleB.py
import sys
stuff = sys._getframe(1).f_locals
print stuff

Prints:

{'__builtins__': <module '__builtin__' (built-in)>,
'__file__': 'C:\\Documents and SettingsMy Documents\
\python\\moduleA.py',
'__name__': '__main__',
'__doc__': None}

Looks like you could query stuff['__file__'] to pull what you're
after.
?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Writing an emulator in python - implementation questions (for performance)

2009-11-12 Thread Santiago Romero

 Oops, numpy arrays start with index=0 :-)
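
Beyond the 0-based indexing, the constants also have to agree on
alignment: a 16-bit view at word index k overlays bytes 2k and 2k+1, so a
word-register constant must be the byte index divided by 2, with the two
byte registers in an aligned pair. A dependency-free sketch of the same
overlay idea with a plain bytearray (byte order is chosen explicitly
here; numpy's uint16 view uses the host's native order, and which half
counts as "high" is a convention you pick):

```python
regs = bytearray(16)

# Byte registers: indices into regs. B and C share an aligned pair.
REG_B, REG_C = 2, 3          # bytes 2 and 3
REG_BC = REG_B // 2          # word register 1 overlays bytes 2 and 3

def get_word(i):
    """Word register i, low byte first (little-endian convention)."""
    return regs[2 * i] | (regs[2 * i + 1] << 8)

regs[REG_B] = 5
regs[REG_C] = 10
print(get_word(REG_BC))      # 5 + 10*256 == 2565
```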


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: standard libraries don't behave like standard 'libraries'

2009-11-12 Thread Benjamin Kaplan
On Thu, Nov 12, 2009 at 12:05 PM, Sriram Srinivasan
 wrote:
>> So all libraries written have to use the common subset, which - unless
>> things are *removed*, which with python3 actually happened - is always
>> the oldest interpreter. And if a feature goes away, they have to be
>> rewritten with the then common subset.
>
> you see that's the problem with py3. instead of slowly removing the
> features one by one till the end they made a jump. the problem is not
> with the jump but with other libraries/packages that depend on py2.
> what i felt was if the new version was available as soon as it is into
> the stream, developers/users try to use them and get accustomed. now
> they follow this. they have frozen py3 for may be 2-3 years so ppl
> would move slowly to py2.6 ,2.7, 2.8... most. also the corporate
> companies are the most influential.
>

The freeze was put in place so that the other implementations of
Python, like Jython and IronPython, have a chance to catch up with the
reference CPython implementation. It's not so people will slowly move
up. FWIW, people knew for years what was going to change in Python 3.

> also a point to note: if new updates keep on coming (for eg. daily
> antivirus update) developers tend to accept and move on *easily*.
>
> i am not an expert in programming/interpreter design/library writing/
> packaging. i just wanted the standard library *too* to be pluggable,
> not just the applications based on it.
> i am not even experienced in python to make bold accusations so all i
> can say is 'how it would be if?' and i always stick to one thing 'why
> not?'.
>
>> In other words: as a library writer, I can't use shiny, new & better
>> features, I'm stuck with the old stuff, and whenever the interpreter
>> evolves I have to rewrite even the old ones. Not appealing. Why develop
>> the interpreter at all then?
>
> if you are app developer you needn't rewrite anything. that is the
> main point here! just specifying the interpreter and library version
> is enough every thing has to work like magic. as the previous
> library&interpreter wont be removed (unless and until it is your will)
> u can use them. thats how everybody would like it to be.
>

So you're saying that we'd have multiple versions of the interpreter
running at the same time, but they all have access to the same memory.
That wouldn't just require a change to Python, that would require a
change to the way all modern operating systems work.

> as for a library writer is concerned: as you say it depends on certain
> things like other libraries,interpreter,etc. but i would like to say
> that change is inevitable. if some core developer has to change
> something he might change. and if you have to update your library you
> have to. this applies to the interpreter too. what python now does is
> make every update work with the minimal set (giving minimal backward
> compatibility) plus the new features too. which is exactly the right
> thing. what i wanted is the users have to be aware now and then when
> of the modification, deletion, addition stuff not in every release as
> in py3k. then the whole backward incompatibility issue would be
> *dissolved* or what i mean is *diffused* and ppl dont even know that
> they have switched over to a very new thing.
>
>

Actually, that's not entirely true. New versions do break things.
Consider these statements:

as = ['a','a','a']
with = [1,2,3]

They're legal in Python 2.4 or earlier, a warning in Python 2.5, but
illegal in Python 2.6.

>
>> In other words: the problem is solved by somehow solving the problem -
>> but not by a concrete solution you propose?
>
> as i told before i neither know the problem nor the solution. i just
> wrote my ideas/imagination down
>
>> PyPy is cool, but by far not advanced enough to replace external,
>> C-based libraries such as NUMPY and PyQt and whatnot.
>>
>> I don't say that the current situation is by any means ideal. There is a
>> lot to be said about package creation, distribution, installation,
>> removal, dependency handling, and even system-packaging-integration if
>> one likes.
>>
>> You IMHO just add another layer of complexity on top of it, without
>> proposing solutions to any of the layers in between, nor your new one -
>> namely, the interpreter version agnosticism.
>
> yes you are right. python has a lot of bindings to a lot of stuff. but
> it is the strength and weakness. not exactly pythons weakness but with
> the thing on the other side. yes it may be looking complex because
> most of the 'standard library' system was not designed to be adhoc/add-
> on/pluggable from the start.


It is pluggable. The standard library consists of Python modules like
any other. 2.6's multiprocessing module is just an incorporation of
the pyprocessing module for instance. The point of the standard
library is that you can count on it being there, and you can count on
it having certain features, given your interpreter version. You can
also be sure that anytime a new ver

Re: How can a module know the module that imported it?

2009-11-12 Thread Ethan Furman

AK Eric wrote:

so:

# moduleA.py
import moduleB

# moduleB.py
import sys
stuff = sys._getframe(1).f_locals
print stuff

Prints:

{'__builtins__': <module '__builtin__' (built-in)>,
'__file__': 'C:\\Documents and SettingsMy Documents\
\python\\moduleA.py',
'__name__': '__main__',
'__doc__': None}

Looks like you could query stuff['__file__'] to pull what you're
after.
?


The leading _ in _getframe indicates a private function to sys (aka 
implementation detail); in other words, something that could easily 
change between one Python version and the next.


I'm using the inspect module (for the moment, at least), and my question 
boils down to:  Will it work correctly on all versions of Python in the 
2.x range?  3.x range?
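
A sketch of the documented route via inspect.stack, which has existed
across the 2.x and 3.x lines (the named .filename attribute on frame
records is 3.5+, while indexing with [1] works everywhere; the function
name is invented):

```python
import inspect

def caller_filename():
    """Filename of the code that called us -- e.g. the importing module."""
    record = inspect.stack()[1]
    return record[1]   # index 1 is the filename; record.filename on 3.5+

print(caller_filename())
```

Nothing here touches underscore-prefixed internals, so it is as stable as
the inspect module's own documented interface.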


~Ethan~
--
http://mail.python.org/mailman/listinfo/python-list


Re: How can a module know the module that imported it?

2009-11-12 Thread AK Eric
On Nov 12, 10:10 am, Ethan Furman  wrote:
> AK Eric wrote:
> > so:
>
> > # moduleA.py
> > import moduleB
>
> > # moduleB.py
> > import sys
> > stuff = sys._getframe(1).f_locals
> > print stuff
>
> > Prints:
>
> > {'__builtins__': <module '__builtin__' (built-in)>,
> > '__file__': 'C:\\Documents and SettingsMy Documents\
> > \python\\moduleA.py',
> > '__name__': '__main__',
> > '__doc__': None}
>
> > Looks like you could query stuff['__file__'] to pull what you're
> > after.
> > ?
>
> The leading _ in _getframe indicates a private function to sys (aka
> implementation detail); in other words, something that could easily
> change between one Python version and the next.
>
> I'm using the inspect module (for the moment, at least), and my question
> boils down to:  Will it work correctly on all versions of Python in the
> 2.x range?  3.x range?
>

Good point, I totally missed that.  Someone had passed that solution
to me at one point and I was so excited I kind of looked that over :P
-- 
http://mail.python.org/mailman/listinfo/python-list


QuerySets in Dictionaries

2009-11-12 Thread scoopseven
I need to create a dictionary of querysets.  I have code that looks
like:

query1 = Myobject.objects.filter(status=1)
query2 = Myobject.objects.filter(status=2)
query3 = Myobject.objects.filter(status=3)

d={}
d['a'] = query1
d['b'] = query2
d['c'] = query3

Is there a way to do this that I'm missing?

Thanks.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: (OT) Recommend FTP Client

2009-11-12 Thread Tim Chase

Someone on this list just recommended I find an ftp client that enables me
to change line endings. He indicated that it would be easy, but googling I
haven't been able to find one. I would prefer a free client, but whatever.
Please send me a recommendation.


How about the command line client that comes with every modern 
OS?  Just use the ASCII (default) and BIN commands to switch 
between line-ending translation.


-tkc





--
http://mail.python.org/mailman/listinfo/python-list


Re: python simply not scaleable enough for google?

2009-11-12 Thread Rami Chowdhury
On Thu, 12 Nov 2009 09:32:28 -0800, Alf P. Steinbach   
wrote:


This also seems religious. It's like in Norway it became illegal to  
market lemon soda, since umpteen years ago it's soda with lemon  
flavoring. This has to do with the *origin* of the citric acid, whether  
natural or chemist's concoction, no matter that it's the same chemical.  
So, some people think that it's wrong to talk about interpreted  
languages, hey, it should be a "language designed for interpretation",  
or better yet, "dynamic language", or bestest, "language with dynamic  
flavor". And slow language, oh no, should be "language whose current  
implementations are perceived as somewhat slow by some (well, all)  
people", but of course, that's just silly.


Perhaps I'm missing the point of what you're saying but I don't see why  
you're conflating interpreted and dynamic here? Javascript is unarguably a  
dynamic language, yet Chrome / Safari 4 / Firefox 3.5 all typically JIT  
it. Does that make Javascript non-dynamic, because it's compiled? What  
about Common Lisp, which is a compiled language when it's run with CMUCL  
or SBCL?



--
Rami Chowdhury
"Never attribute to malice that which can be attributed to stupidity" --  
Hanlon's Razor

408-597-7068 (US) / 07875-841-046 (UK) / 0189-245544 (BD)
--
http://mail.python.org/mailman/listinfo/python-list


Re: #define (from C) in Python

2009-11-12 Thread TerryP
If it's such a big hairy deal, just recompile a copy of the C
preprocessor to use something other than #, and hook it up to your Python
scripts in a pipeline from a shell wrapper.

Personally, I'd rather have Lisp's lambda or Perl's sub than C's
preprocessor, and even in those cases, Python suffices perfectly
fine ;).
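
For most constant-style and function-style uses of #define, the plain
Python equivalents are enough. A small sketch:

```python
# Constant-style #define: just a module-level name by convention.
BUFFER_SIZE = 4096

# Function-style macro: a small function (or lambda) instead.
def square(x):
    return x * x

print(BUFFER_SIZE, square(7))
```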


Re: QuerySets in Dictionaries

2009-11-12 Thread Simon Brunning
2009/11/12 scoopseven :
> I need to create a dictionary of querysets.  I have code that looks
> like:
>
> query1 = Myobject.objects.filter(status=1)
> query2 = Myobject.objects.filter(status=2)
> query3 = Myobject.objects.filter(status=3)
>
> d={}
> d['a'] = query1
> d['b'] = query2
> d['c'] = query3
>
> Is there a way to do this that I'm missing?

Untested:

wanted = (('a', 1), ('b', 2), ('c', 3))
d = dict((key, Myobject.objects.filter(status=number))
         for (key, number) in wanted)
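
The same pattern can be exercised outside Django with a stand-in for
Myobject.objects.filter (the data here is made up for the demo; in a
real project the values would be lazy QuerySets, not lists):

```python
# Stand-in for Myobject.objects.filter(status=n); in Django this would
# return a QuerySet instead of a plain list.
def fake_filter(status):
    records = [{'id': 1, 'status': 1}, {'id': 2, 'status': 2},
               {'id': 3, 'status': 2}, {'id': 4, 'status': 3}]
    return [r for r in records if r['status'] == status]

wanted = (('a', 1), ('b', 2), ('c', 3))
d = dict((key, fake_filter(number)) for (key, number) in wanted)

print(len(d['b']))  # the records with status == 2
```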


-- 
Cheers,
Simon B.


Re: QuerySets in Dictionaries

2009-11-12 Thread Diez B. Roggisch

scoopseven schrieb:

I need to create a dictionary of querysets.  I have code that looks
like:

query1 = Myobject.objects.filter(status=1)
query2 = Myobject.objects.filter(status=2)
query3 = Myobject.objects.filter(status=3)

d={}
d['a'] = query1
d['b'] = query2
d['c'] = query3

Is there a way to do this that I'm missing?



d = dict(a=Myobject.objects.filter(status=1),
 b=Myobject.objects.filter(status=2),
 c=Myobject.objects.filter(status=3))

Diez


Computing the 1000th prime

2009-11-12 Thread Ray Holt
I have an assignment to find the 1000th prime using python. What's wrong
with the following code:
PrimeCount = 0
PrimeCandidate = 1
while PrimeCount < 2000:
IsPrime = True
PrimeCandidate = PrimeCandidate + 2
for x in range(2, PrimeCandidate):
if PrimeCandidate % x == 0:
##print PrimeCandidate, 'equals', x, '*', PrimeCandidate/x
  print PrimeCandidate  
  IsPrime = False
  break
if IsPrime:
PrimeCount = PrimeCount + 1
PrimeCandidate = PrimeCandidate + 2
print PrimeCandidate
Thanks


Re: regex remove closest tag

2009-11-12 Thread MRAB

S.Selvam wrote:

Hi all,


1) I need to remove the </a> tag which is just before the keyword (i.e. 
some_text2), excluding the others.


2) input string may or may not contain </a> tags.

3) Sample input: 
 
inputstr = """start <a href="">some_text1</a> <a href="">some_text2</a> keyword anything"""


4) I came up with the following regex,

   
p = re.compile(r'(?P<good1>.*?)(</a>\s*keyword|\s*keyword)(?P<good2>.*)', re.DOTALL|re.I)

   s = p.search(inputstr)
  but the second group matches both </a> tags, while I need to match the 
most recent one only.


I would like to get your suggestions.

Note:

   If I leave group('good1') as greedy, then it matches both </a> tags.


".*?" can match any number of any character, so it can match any
intervening "</a>" tags. Try "[^<]*?" instead.
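
A self-contained illustration of the difference (the input string and
patterns here are made up for the demo, not the OP's exact ones):

```python
import re

text = 'start <a href="">one</a> <a href="">two</a> keyword rest'

# ".*?" may span an earlier "</a>", so the match can start at the first
# <a> element and swallow everything up to the keyword:
m1 = re.search(r'<a href="">(.*?)</a>\s*keyword', text)
print(m1.group(1))  # 'one</a> <a href="">two'

# "[^<]*?" cannot cross a "<", which pins the match to the <a> element
# nearest the keyword:
m2 = re.search(r'<a href="">([^<]*?)</a>\s*keyword', text)
print(m2.group(1))  # 'two'
```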



Re: python simply not scaleable enough for google?

2009-11-12 Thread Alf P. Steinbach

* Rami Chowdhury:
On Thu, 12 Nov 2009 09:32:28 -0800, Alf P. Steinbach  
wrote:


This also seems religious. It's like in Norway it became illegal to 
market lemon soda, since umpteen years ago it's soda with lemon 
flavoring. This has to do with the *origin* of the citric acid, 
whether natural or chemist's concoction, no matter that it's the same 
chemical. So, some people think that it's wrong to talk about 
interpreted languages, hey, it should be a "language designed for 
interpretation", or better yet, "dynamic language", or bestest, 
"language with dynamic flavor". And slow language, oh no, should be 
"language whose current implementations are perceived as somewhat slow 
by some (well, all) people", but of course, that's just silly.


Perhaps I'm missing the point of what you're saying but I don't see why 
you're conflating interpreted and dynamic here? Javascript is unarguably 
a dynamic language, yet Chrome / Safari 4 / Firefox 3.5 all typically 
JIT it. Does that make Javascript non-dynamic, because it's compiled? 
What about Common Lisp, which is a compiled language when it's run with 
CMUCL or SBCL?


Yeah, you missed it.

Blurring and coloring and downright hiding reality by insisting on misleading 
but apparently more precise terminology for some vague concept is a popular 
sport, and chiding others for using more practical and real-world oriented 
terms, can be effective in politics and some other arenas.


But in a technical context it's silly. Or dumb. Whatever.

E.g. you'll find it impossible to define interpretation rigorously in the sense 
that you're apparently thinking of. It's not that kind of term or concept. The 
nearest you can get is in a different direction, something like "a program whose 
actions are determined by data external to the program (+ x qualifications and 
weasel words)", which works in-practice, conceptually, but try that on as a 
rigorous definition and you'll see that when you get formal about it then it's 
completely meaningless: either anything qualifies or nothing qualifies.


You'll also find it impossible to rigorously define "dynamic language" in a 
general way so that that definition excludes C++. 


So, to anyone who understands what one is talking about, "interpreted", or e.g. 
"slow language" (as was the case here), conveys the essence.


And to anyone who doesn't understand it trying to be more precise is an exercise 
in futility and pure silliness  --  except for the purpose of misleading.



Cheers & hth.,

- Alf


Re: (OT) Recommend FTP Client

2009-11-12 Thread Dave Angel



Victor Subervi wrote:

Hi;
Someone on this list just recommended I find an ftp client that enables me
to change line endings. He indicated that it would be easy, but googling I
haven't been able to find one. I would prefer a free client, but whatever.
Please send me a recommendation.
TIA,
Victor

  

Try  http://fireftp.mozdev.org/

fireftp is an (free) addon to Firefox.  If you're already using Firefox, 
it's painless to download it, and easy to use.  You can set up 
account(s) complete with passwords and default directories, and then 
transfer individual files or whole directory trees just by selecting and 
pressing the Arrow icons (one for upload, one for download).  You can 
sort by timestamp, so it's not hard to just transfer new stuff.



One of the Tools->Options tabs is  "Downloads/Uploads".  First box looks 
like:


  When transferring files use:
  o - Automatic mode
  o - Binary mode
  o - ASCII mode


According to the help, Automatic mode is very useful for CGI scripts, 
where you specify which extensions will get the line-endings converted.

 http://fireftp.mozdev.org/help.html

But please understand:  I personally use Binary mode in FireFTP because 
I'm a control freak.  Python can handle Unix line-endings just fine, so 
any files that are bound for CGI I just edit in that mode in the first 
place.



DaveA



Re: python with echo

2009-11-12 Thread MRAB

Steven D'Aprano wrote:

On Wed, 11 Nov 2009 17:24:37 -0800, hong zhang wrote:


List,

I have a question of python using echo.

POWER = 14
return_value = os.system('echo 14 >
/sys/class/net/wlan1/device/tx_power')

can assign 14 to tx_power

But
return_value = os.system('echo $POWER >
/sys/class/net/wlan1/device/tx_power')


POWER = 14 doesn't create an environment variable visible to echo. It is 
a Python variable.
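
One way to make the Python value visible to the shell, assuming a POSIX
system (the /tmp path here is illustrative, standing in for the sysfs
file):

```python
import os

power = 14

# Option 1: interpolate the value into the command string itself.
os.system('echo %d > /tmp/tx_power_demo' % power)

# Option 2: export it as a real environment variable, which child
# processes (including the shell run by os.system) inherit.
os.environ['POWER'] = str(power)
status = os.system('echo $POWER > /tmp/tx_power_demo')

with open('/tmp/tx_power_demo') as f:
    print(f.read().strip())  # '14'
```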




>>> POWER = 14
>>> import os
>>> return_value = os.system('echo $POWER')

>>> return_value
0




return_value is 256 not 0. It cannot assign 14 to tx_power.



I don't understand that. Exit status codes on all systems I'm familiar 
with are limited to 0 through 255. What operating system are you using?


Assuming your system allows two-byte exit statuses, you should check the 
documentation for echo and the shell to see why it is returning 256.



In some OSs the exit status consists of 2 fields, one being the child
process's exit status and the other being supplied by the OS.

The reason is simple. What if the child process terminated abnormally?
You'd like an exit status to tell you that, but you wouldn't want it to
be confused with the child process's own exit status, assuming that it
had terminated normally.
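
On POSIX systems the two fields can be pulled apart with helpers in the
os module (a quick sketch; os.WIFEXITED and os.WEXITSTATUS don't exist
on Windows):

```python
import os

# The shell exits with status 1; os.system returns the raw 16-bit wait
# status, in which the child's exit code sits in the high byte.
raw = os.system('exit 1')
print(raw)                      # 256 on Linux

if os.WIFEXITED(raw):           # did the child terminate normally?
    print(os.WEXITSTATUS(raw))  # 1 -- the child's own exit code
```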

Have you tried this in the shell, without involving Python? I will almost 
guarantee that Python is irrelevant to the problem.






Re: Does turtle graphics have the wrong associations?

2009-11-12 Thread Terry Reedy

Alf P. Steinbach wrote:
One reaction to http://preview.tinyurl.com/ProgrammingBookP3> has been that turtle 
graphics may be off-putting to some readers because it is associated 
with children's learning.


What do you think?


I just started using the module for simple plots.
I am not a child.
You cannot please everyone.



Re: Does turtle graphics have the wrong associations?

2009-11-12 Thread AK Eric
On Nov 12, 11:31 am, Terry Reedy  wrote:
> Alf P. Steinbach wrote:
> > One reaction to  >http://preview.tinyurl.com/ProgrammingBookP3> has been that turtle
> > graphics may be off-putting to some readers because it is associated
> > with children's learning.
>
> > What do you think?
>
> I just started using the module for simple plots.
> I am not a child.
> You cannot please everyone.

I used Turtle back on the Apple in the early 80's... so I personally
have very positive feelings towards it ;)  To each their own eh?


Re: python simply not scaleable enough for google?

2009-11-12 Thread Rami Chowdhury
On Thu, 12 Nov 2009 11:24:18 -0800, Alf P. Steinbach   
wrote:



* Rami Chowdhury:
On Thu, 12 Nov 2009 09:32:28 -0800, Alf P. Steinbach   
wrote:


This also seems religious. It's like in Norway it became illegal to  
market lemon soda, since umpteen years ago it's soda with lemon  
flavoring. This has to do with the *origin* of the citric acid,  
whether natural or chemist's concoction, no matter that it's the same  
chemical. So, some people think that it's wrong to talk about  
interpreted languages, hey, it should be a "language designed for  
interpretation", or better yet, "dynamic language", or bestest,  
"language with dynamic flavor". And slow language, oh no, should be  
"language whose current implementations are perceived as somewhat slow  
by some (well, all) people", but of course, that's just silly.
 Perhaps I'm missing the point of what you're saying but I don't see  
why you're conflating interpreted and dynamic here? Javascript is  
unarguably a dynamic language, yet Chrome / Safari 4 / Firefox 3.5 all  
typically JIT it. Does that make Javascript non-dynamic, because it's  
compiled? What about Common Lisp, which is a compiled language when  
it's run with CMUCL or SBCL?


Yeah, you missed it.

Blurring and coloring and downright hiding reality by insisting on  
misleading but apparently more precise terminology for some vague  
concept is a popular sport, and chiding others for using more practical  
and real-world oriented terms, can be effective in politics and some  
other arenas.





But in a technical context it's silly. Or dumb. Whatever.

E.g. you'll find it impossible to define interpretation rigorously in  
the sense that you're apparently thinking of.


Well, sure. Can you explain, then, what sense you meant it in?

You'll also find it impossible to rigorously define "dynamic language"  
in a general way so that that definition excludes C++. 


Or, for that matter, suitably clever assembler. I'm not arguing with you  
there.


So, to anyone who understands what one is talking about, "interpreted",  
or e.g. "slow language" (as was the case here), conveys the essence.


Not when the context isn't clear, it doesn't.

And to anyone who doesn't understand it trying to be more precise is an  
exercise in futility and pure silliness  --  except for the purpose of  
misleading.


Or for the purpose of greater understanding, surely - and isn't that the  
point?



--
Rami Chowdhury
"Never attribute to malice that which can be attributed to stupidity" --  
Hanlon's Razor

408-597-7068 (US) / 07875-841-046 (UK) / 0189-245544 (BD)


Re: Computing the 1000th prime

2009-11-12 Thread MRAB

Ray Holt wrote:
I have an assignment to find the 1000th prime using python. What's wrong 
with the following code:

PrimeCount = 0
PrimeCandidate = 1
while PrimeCount < 2000:
IsPrime = True
PrimeCandidate = PrimeCandidate + 2
for x in range(2, PrimeCandidate):
if PrimeCandidate % x == 0:
##print PrimeCandidate, 'equals', x, '*', PrimeCandidate/x
  print PrimeCandidate 
  IsPrime = False

  break
if IsPrime:
PrimeCount = PrimeCount + 1
PrimeCandidate = PrimeCandidate + 2
print PrimeCandidate
Thanks


The indentation starting from the second 'if'.



Re: (OT) Recommend FTP Client

2009-11-12 Thread Victor Subervi
Thanks.
V

On Thu, Nov 12, 2009 at 2:26 PM, Dave Angel  wrote:

>
>
> Victor Subervi wrote:
>
>> Hi;
>> Someone on this list just recommended I find an ftp client that enables me
>> to change line endings. He indicated that it would be easy, but googling I
>> haven't been able to find one. I would prefer a free client, but whatever.
>> Please send me a recommendation.
>> TIA,
>> Victor
>>
>>
>>
> Try  http://fireftp.mozdev.org/
>
> fireftp is an (free) addon to Firefox.  If you're already using Firefox,
> it's painless to download it, and easy to use.  You can set up account(s)
> complete with passwords and default directories, and then transfer
> individual files or directory trees full just by selecting and pressing the
> Arrow icons (one for upload, one for download).  You can sort by timestamp,
> so it's not hard to just transfer new stuff.
>
>
> One of the Tools->Options tabs is  "Downloads/Uploads".  First box looks
> like:
>
>  When transferring files use:
>  o - Automatic mode
>  o - Binary mode
>  o - ASCII mode
>
>
> According to the help, Automatic mode is very useful for CGI scripts, where
> you specify which extensions will get the line-endings converted.
> http://fireftp.mozdev.org/help.html
>
> But please understand:  I personally use Binary mode in FireFTP because I'm
> a control freak.  Python can handle Unix line-endings just fine, so any
> files that are bound for CGI I just edit in that mode in the first place.
>
>
> DaveA
>
>


3.x and 2.x on same machine (is this info at Python.org??)

2009-11-12 Thread rantingrick
Hello,

Currently i am using 2.6 on Windows and need to start writing code in
3.0. I cannot leave 2.x yet because 3rd party modules are still not
converted. So i want to install 3.0 without disturbing my current
Python2.x. What i'm afraid of is that some SYSVARIABLE will get
changed to Python3.0 and when i double click a Python script it will
try and run Python 3.x instead of 2.x. I only want to run 3.0 scripts
from the command line... > python3.x myscript.py

So how do i do this? Is my fear unfounded?

Thanks


Re: python simply not scaleable enough for google?

2009-11-12 Thread Alf P. Steinbach

* Rami Chowdhury:
On Thu, 12 Nov 2009 11:24:18 -0800, Alf P. Steinbach  
wrote:



* Rami Chowdhury:
On Thu, 12 Nov 2009 09:32:28 -0800, Alf P. Steinbach  
wrote:


This also seems religious. It's like in Norway it became illegal to 
market lemon soda, since umpteen years ago it's soda with lemon 
flavoring. This has to do with the *origin* of the citric acid, 
whether natural or chemist's concoction, no matter that it's the 
same chemical. So, some people think that it's wrong to talk about 
interpreted languages, hey, it should be a "language designed for 
interpretation", or better yet, "dynamic language", or bestest, 
"language with dynamic flavor". And slow language, oh no, should be 
"language whose current implementations are perceived as somewhat 
slow by some (well, all) people", but of course, that's just silly.
 Perhaps I'm missing the point of what you're saying but I don't see 
why you're conflating interpreted and dynamic here? Javascript is 
unarguably a dynamic language, yet Chrome / Safari 4 / Firefox 3.5 
all typically JIT it. Does that make Javascript non-dynamic, because 
it's compiled? What about Common Lisp, which is a compiled language 
when it's run with CMUCL or SBCL?


Yeah, you missed it.

Blurring and coloring and downright hiding reality by insisting on 
misleading but apparently more precise terminology for some vague 
concept is a popular sport, and chiding others for using more 
practical and real-world oriented terms, can be effective in politics 
and some other arenas.





But in a technical context it's silly. Or dumb. Whatever.

E.g. you'll find it impossible to define interpretation rigorously in 
the sense that you're apparently thinking of.


Well, sure. Can you explain, then, what sense you meant it in?


I think that was in the part you *snipped* here. Just fill in the mentioned 
qualifications and weasel words. And considering that a routine might be an 
interpreter of data produced elsewhere in the program, that needs some fixing...



You'll also find it impossible to rigorously define "dynamic language" 
in a general way so that that definition excludes C++. 


Or, for that matter, suitably clever assembler. I'm not arguing with you 
there.


So, to anyone who understands what one is talking about, 
"interpreted", or e.g. "slow language" (as was the case here), conveys 
the essence.


Not when the context isn't clear, it doesn't.

And to anyone who doesn't understand it trying to be more precise is 
an exercise in futility and pure silliness  --  except for the purpose 
of misleading.


Or for the purpose of greater understanding, surely - and isn't that the 
point?


I don't think that was the point.

Specifically, I reacted to the statement, made in response to someone upthread, 
that it's nonsense to talk about the speed of a language, in the context of 
Google finding CPython overall too slow.


It is quite slow. ;-)


Cheers,

- Alf


Re: Computing the 1000th prime

2009-11-12 Thread Benjamin Kaplan
On Thursday, November 12, 2009, Ray Holt  wrote:
> I have an assignment to find the 1000th prime using python. What's wrong
> with the following code:
> PrimeCount = 0
> PrimeCandidate = 1
> while PrimeCount < 2000:
> IsPrime = True
> PrimeCandidate = PrimeCandidate + 2
> for x in range(2, PrimeCandidate):
> if PrimeCandidate % x == 0:
> ##print PrimeCandidate, 'equals', x, '*', PrimeCandidate/x
>   print PrimeCandidate
>   IsPrime = False
>   break
> if IsPrime:
> PrimeCount = PrimeCount + 1
> PrimeCandidate = PrimeCandidate + 2
> print PrimeCandidate
> Thanks
>

You break on the first composite number, which means you immediately
exit the loop. Just let it fall through. Also, a couple of things to
speed it up:

1) you look at all numbers from 2 to n to check if n is a prime
number. You only need to check from 2 to int(math.sqrt(n))

2) to really speed it up, keep a list of all the prime numbers. Then
you only need to check if a number is divisible by those

By combining the two, you'll only use a fraction of the comparisons.
For 23, you'd only loop twice (2 and 3) instead of 20 times to
determine that it's prime. The difference is even more dramatic on
large numbers.
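
A sketch of both speedups combined (dividing only by the primes found so
far, and only up to the square root of the candidate):

```python
import math

def nth_prime(n):
    """Return the n-th prime (1-based) by trial division."""
    primes = [2]
    candidate = 3
    while len(primes) < n:
        limit = int(math.sqrt(candidate))
        # Only divide by primes found so far, and only up to sqrt(candidate).
        if all(candidate % p != 0 for p in primes if p <= limit):
            primes.append(candidate)
        candidate += 2
    return primes[n - 1]

print(nth_prime(1000))  # 7919
```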


Psyco on 64-bit machines

2009-11-12 Thread Russ P.
I have a Python program that runs too slow for some inputs. I would
like to speed it up without rewriting any code. Psyco seemed like
exactly what I need, until I saw that it only works on a 32-bit
architecture. I work in an environment of Sun Ultras that are all 64-
bit. However, the Psyco docs say this:

"Psyco does not support the 64-bit x86 architecture, unless you have a
Python compiled in 32-bit compatibility mode."

Would it make sense to compile Python in the 32-bit compatibility mode
so I can use Psyco? What would I lose in that mode, if anything?
Thanks.


Re: Computing the 1000th prime

2009-11-12 Thread Benjamin Kaplan
On Thursday, November 12, 2009, Benjamin Kaplan
 wrote:
> On Thursday, November 12, 2009, Ray Holt  wrote:
>> I have an assignment to find the 1000th prime using python. What's wrong
>> with the following code:
>> PrimeCount = 0
>> PrimeCandidate = 1
>> while PrimeCount < 2000:
>> IsPrime = True
>> PrimeCandidate = PrimeCandidate + 2
>> for x in range(2, PrimeCandidate):
>> if PrimeCandidate % x == 0:
>> ##print PrimeCandidate, 'equals', x, '*', PrimeCandidate/x
>>   print PrimeCandidate
>>   IsPrime = False
>>   break
>> if IsPrime:
>
> You break on the first composite number, which means you immediately
> exit the loop. Just let it fall through Also, a couple of things to
> speed in up:
>

Never mind, MRAB is right. I missed the indentation error there. I guess
that's what I get for trying to evaluate code on my iPod touch
instead of getting my computer out and actually seeing what it's
doing.  >.<
> 1) you look at all numbers from 2 to n to check if n is a prime
> number. You only need to check from 2 to int(math.sqrt(n))
>
> 2) to really speed it up, keep a list of all the prime numbers. Then
> you only need to check if a number is divisible by those
>
> By combining the two, you'll only use a fraction of the comparisons.
> For 23, you'd only loop twice (2 and 3) instead of 20 times to
> determine that it's prime. The difference is even more dramatic on
> large numbers.
>


Re: python simply not scaleable enough for google?

2009-11-12 Thread Rami Chowdhury
On Thu, 12 Nov 2009 12:02:11 -0800, Alf P. Steinbach   
wrote:
I think that was in the part you *snipped* here. Just fill in the  
mentioned qualifications and weasel words.


OK, sure. I don't think they're weasel words, because I find them useful,  
but I think I see where you're coming from.


Specifically, I reacted to the statement, made in response to someone  
upthread, that it's nonsense to talk about the speed of a language, in the  
context of Google finding CPython overall too slow.


IIRC it was "the speed of a language" that was asserted to be nonsense,  
wasn't it? Which IMO is fair -- a physicist friend of mine works with a  
C++ interpreter which is relatively sluggish, but that doesn't mean C++ is  
slow...


--
Rami Chowdhury
"Never attribute to malice that which can be attributed to stupidity" --  
Hanlon's Razor

408-597-7068 (US) / 07875-841-046 (UK) / 0189-245544 (BD)


Re: Psyco on 64-bit machines

2009-11-12 Thread Chris Kaynor
On Thu, Nov 12, 2009 at 12:06 PM, Russ P.  wrote:

> I have a Python program that runs too slow for some inputs. I would
> like to speed it up without rewriting any code. Psyco seemed like
> exactly what I need, until I saw that it only works on a 32-bit
> architecture. I work in an environment of Sun Ultras that are all 64-
> bit. However, the Psyco docs say this:
>
> "Psyco does not support the 64-bit x86 architecture, unless you have a
> Python compiled in 32-bit compatibility mode."
>
> Would it make sense to compile Python in the 32-bit compatibility mode
> so I can use Psyco? What would I lose in that mode, if anything?
> Thanks.
> --
> http://mail.python.org/mailman/listinfo/python-list
>

The only things you should lose by using a 32-bit version of Python is
access to the memory beyond the 4GB limit (approximate - the OS takes some
of that), and any compiled extension modules you cannot find or recompile
for 32-bit (.pyd on Windows - I think .so on Linux).
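
A quick way to check which mode a given Python build is running in,
before going to the trouble of recompiling:

```python
import struct

# Size of a C pointer in this interpreter: 4 bytes on a 32-bit build,
# 8 bytes on a 64-bit build.
bits = struct.calcsize('P') * 8
print('%d-bit Python' % bits)
```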

Chris


Re: python simply not scaleable enough for google?

2009-11-12 Thread Benjamin Kaplan
On Thu, Nov 12, 2009 at 2:24 PM, Alf P. Steinbach  wrote:
>
> You'll also find it impossible to rigorously define "dynamic language" in a
> general way so that that definition excludes C++. 
>
> So, to anyone who understands what one is talking about, "interpreted", or
> e.g. "slow language" (as was the case here), conveys the essence.
>
> And to anyone who doesn't understand it trying to be more precise is an
> exercise in futility and pure silliness  --  except for the purpose of
> misleading.

You just made Rami's point. You can't define a language as "interpreted".
You can, however, describe what features it has: static vs. dynamic
typing, duck typing, dynamic dispatch, and so on. Those are features of
the language. Other things, like "interpreted" vs. "compiled", are
features of the implementation. C++, for instance, is considered a
language that gets compiled to machine code. However,
Visual Studio can compile C++ programs to run on the .NET framework
which makes them JIT compiled. Someone could even write an
interpreter for C++ if they wanted to.


Re: 3.x and 2.x on same machine (is this info at Python.org??)

2009-11-12 Thread Benjamin Kaplan
On Thu, Nov 12, 2009 at 2:52 PM, rantingrick  wrote:
> Hello,
>
> Currently i am using 2.6 on Windows and need to start writing code in
> 3.0. I cannot leave 2.x yet because 3rd party modules are still not
> converted. So i want to install 3.0 without disturbing my current
> Python2.x. What i'm afraid of is that some SYSVARIABLE will get
> changed to Python3.0 and when i double click a Python script it will
> try and run Python 3.x instead of 2.x. I only want to run 3.0 scripts
> from the command line... > python3.x myscript.py
>
> So how do i do this? Is my fear unfounded?
>

At least on *nix (including OS X), installing Python 3 does exactly
what you want by default. I don't know how it handles it on Windows.

> Thanks
> --
> http://mail.python.org/mailman/listinfo/python-list
>


Re: 3.x and 2.x on same machine (is this info at Python.org??)

2009-11-12 Thread Terry Reedy

rantingrick wrote:

Hello,

Currently i am using 2.6 on Windows and need to start writing code in
3.0. I cannot leave 2.x yet because 3rd party modules are still not
converted. So i want to install 3.0 without disturbing my current
Python2.x. What i'm afraid of is that some SYSVARIABLE will get
changed to Python3.0 and when i double click a Python script it will
try and run Python 3.x instead of 2.x. I only want to run 3.0 scripts
from the command line... > python3.x myscript.py

So how do i do this? Is my fear unfounded?


When you install 3.1 (not 3.0), it asks whether to make 'this' the 
default Python. Make sure the box is unchecked.
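
Independent of the installer checkbox, a script that only works on one
major version can check for itself at startup (a small defensive sketch;
adjust REQUIRED_MAJOR to the version the script actually targets):

```python
import sys

REQUIRED_MAJOR = 2  # the major version this hypothetical script targets

if sys.version_info[0] != REQUIRED_MAJOR:
    # Warn loudly instead of failing later with a confusing SyntaxError.
    sys.stderr.write('Warning: expected Python %d.x, running %d.%d\n'
                     % (REQUIRED_MAJOR, sys.version_info[0],
                        sys.version_info[1]))
```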




Re: Computing the 1000th prime

2009-11-12 Thread Dave Angel

Ray Holt wrote:

I have an assignment to find the 1000th prime using python. What's wrong
with the following code:
PrimeCount = 0
PrimeCandidate = 1
while PrimeCount < 2000:
IsPrime = True
PrimeCandidate = PrimeCandidate + 2
for x in range(2, PrimeCandidate):
if PrimeCandidate % x == 0:
##print PrimeCandidate, 'equals', x, '*', PrimeCandidate/x
  print PrimeCandidate  
  IsPrime = False

  break
if IsPrime:
PrimeCount = PrimeCount + 1
PrimeCandidate = PrimeCandidate + 2
print PrimeCandidate
Thanks

  
There are a bunch of things wrong here.  Did you write this code, or was 
it copied from somewhere else?  Because it looks like there are several 
typos, that you could catch by inspection.


First, at what point in the loop do you decide that you have a prime?  
Why not add a print there, printing the prime, instead of printing a 
value that's already been incremented beyond it.  And put labels on your 
prints, or you'll never be able to decipher the output.  Change the 
limit for PrimeCount to something small while you're debugging, because 
you can figure out small primes and composites by hand.


Second, you have a loop which divides by x.  But you change the 
PrimeCandidate within the loop, so it's not dividing the same value each 
time through.  Check your indentation.


Third, your limit value is too high.  You aren't looking for 2000 
primes, but 1000 of them.  Further, your count is off by 1, as this loop 
doesn't identify 2 as a prime.


I'd recommend making this whole thing a function, and have it actually 
build and return a list of primes.  Then the caller can check both the 
size of the list and do a double check of the primality of each member.  
And naturally you'll be able to more readily check it yourself, either 
by eye or by comparing it to one of the standard list of primes you can 
find on the internet.



The function should take a count, and loop until the len(primelist) 
matches the count.  Then just return the primelist.
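
A minimal sketch of that design (the function takes a count and returns
the list, so the caller can check both the length and the members):

```python
def first_primes(count):
    """Return a list of the first `count` primes, by trial division."""
    primes = []
    candidate = 2
    while len(primes) < count:
        # Divide only by the primes collected so far.
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

plist = first_primes(1000)
print(len(plist), plist[-1])  # 1000 7919
```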


DaveA



Re: Python & Go

2009-11-12 Thread Patrick Sabin

Carl Banks wrote:

Well, it's hard to argue with not being like C++, but the lack of
inheritance is a doozie.


Well it has the concept of embedding, which seems to be similar to 
inheritance.


- Patrick


Re: (OT) Recommend FTP Client

2009-11-12 Thread David M. Besonen
On 11/12/2009 11:26 AM, Dave Angel wrote:

> Try  http://fireftp.mozdev.org/

in the past i found this to be buggy.  i'd recommend
something different.

what is your OS?


  -- david



Re: python simply not scaleable enough for google?

2009-11-12 Thread Rami Chowdhury
On Thu, 12 Nov 2009 12:44:00 -0800, Benjamin Kaplan  
 wrote:



Someone could even write an
interpreter for C++ if they wanted to.


Someone has (http://root.cern.ch/drupal/content/cint)!

--
Rami Chowdhury
"Never attribute to malice that which can be attributed to stupidity" --  
Hanlon's Razor

408-597-7068 (US) / 07875-841-046 (UK) / 0189-245544 (BD)

