Re: [ANN] Lupa 0.6 - Lua in Python
Fabrizio Milo aka misto, 19.07.2010 15:41: This is very very interesting. Do you have any direct application of it? I know games like World of Warcraft use Lua as a scripting language. Lua is widely used in the gaming industry, mainly for its size but also for its speed. Personally, I don't have any direct use for it, so this is mainly a fun project to see how well I can get it integrated and how fast I can get it. Stefan -- http://mail.python.org/mailman/listinfo/python-list
hasattr + __getattr__: I think this is Python bug
hi all, I have a class (FuncDesigner oofun) that has no attribute "size", but it is overloaded in __getattr__, so if someone invokes "myObject.size", it is generated (as another oofun) and connected to myObject as an attribute. So, when I invoke "hasattr(myObject, 'size')" in another code part, instead of returning True/False it goes to __getattr__ and starts the constructor for yet another oofun, which wasn't the intended behaviour. Thus "hasattr(myObject, 'size')" always returns True. It prevents some bonuses and new features from being implemented in FuncDesigner.
>>> 'size' in dir(b)
False
>>> hasattr(b,'size')
True
>>> 'size' in dir(b)
True
Could you fix it? -- http://mail.python.org/mailman/listinfo/python-list
Re: hasattr + __getattr__: I think this is Python bug
On Tue, Jul 20, 2010 at 3:10 AM, dmitrey wrote: > hi all, > I have a class (FuncDesigner oofun) that has no attribute "size", but > it is overloaded in __getattr__, so if someone invokes > "myObject.size", it is generated (as another oofun) and connected to > myObject as attribute. > > So, when I invoke in other code part "hasattr(myObject, 'size')", > instead o returning True/False it goes to __getattr__ and starts > constructor for another one oofun, that wasn't intended behaviour. > Thus "hasattr(myObject, 'size')" always returns True. It prevents me > of some bonuses and new features to be done in FuncDesigner. > 'size' in dir(b) > False hasattr(b,'size') > True 'size' in dir(b) > True > > Could you fix it? There's probably some hackery you could do to check whether hasattr() is in the call stack, and then not dynamically create the attribute in __getattr__ if that's the case, but that's obviously quite kludgey. /Slightly/ less hackish: Replace hasattr() in the __builtin__ module with your own implementation that treats instances of FuncDesigner specially. Least ugly suggestion: Just don't use hasattr(); use your `x in dir(y)` trick instead. > Subject: [...] I think this is Python bug Nope, that's just how hasattr() works. See http://docs.python.org/library/functions.html#hasattr (emphasis mine): """ hasattr(object, name) The arguments are an object and a string. The result is True if the string is the name of one of the object’s attributes, False if not. (***This is implemented by calling getattr(object, name)*** and seeing whether it raises an exception or not.) """ I suppose you could argue for the addition of a __hasattr__ special method, but this seems like a really rare use case to justify adding such a method (not to mention the language change moratorium is still currently in effect). Cheers, Chris -- http://blog.rebertia.com -- http://mail.python.org/mailman/listinfo/python-list
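For readers following along, the behaviour under discussion is easy to reproduce; this is a minimal sketch (the class and attribute names are illustrative, not FuncDesigner code):

```python
class Greedy(object):
    def __getattr__(self, name):
        # Only called when normal lookup fails; here it creates the
        # missing attribute on the fly, much like dmitrey's oofun.
        value = "computed"
        setattr(self, name, value)
        return value

g = Greedy()
print('size' in dir(g))    # False: nothing created yet
print(hasattr(g, 'size'))  # True: hasattr called getattr, which ran __getattr__
print('size' in dir(g))    # True: the mere hasattr() check created the attribute
```

Since hasattr() is defined as "call getattr() and see whether it raises", any side effect of __getattr__ happens during the check itself.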
Re: hasattr + __getattr__: I think this is Python bug
On Jul 20, 1:37 pm, Chris Rebert wrote:
> On Tue, Jul 20, 2010 at 3:10 AM, dmitrey wrote:
> > hi all,
> > I have a class (FuncDesigner oofun) that has no attribute "size", but
> > it is overloaded in __getattr__, so if someone invokes
> > "myObject.size", it is generated (as another oofun) and connected to
> > myObject as attribute.
>
> > So, when I invoke in other code part "hasattr(myObject, 'size')",
> > instead o returning True/False it goes to __getattr__ and starts
> > constructor for another one oofun, that wasn't intended behaviour.
> > Thus "hasattr(myObject, 'size')" always returns True. It prevents me
> > of some bonuses and new features to be done in FuncDesigner.
>
> 'size' in dir(b)
> > False
> hasattr(b,'size')
> > True
> 'size' in dir(b)
> > True
>
> > Could you fix it?
>
> There's probably some hackery you could do to check whether hasattr()
> is in the call stack, and then not dynamically create the attribute in
> __getattr__ if that's the case, but that's obviously quite kludgey.
That solution is too unreliable; hasattr may or may not appear in the
stack in different cases.
> /Slightly/ less hackish: Replace hasattr() in the __builtin__ module
> with your own implementation that treats instances of FuncDesigner
> specially.
It's too unreliable as well
> Least ugly suggestion: Just don't use hasattr(); use your `x in
> dir(y)` trick instead.
something in dir() consumes O(n) operations for a lookup, while hasattr
or getattr() requires O(log(n)). It matters to me, because it runs inside
the deeply nested mathematical computations FuncDesigner is made for.
>
> > Subject: [...] I think this is Python bug
>
> Nope, that's just how hasattr() works.
> See http://docs.python.org/library/functions.html#hasattr (emphasis mine):
> """
> hasattr(object, name)
> The arguments are an object and a string. The result is True if
> the string is the name of one of the object’s attributes, False if
> not. (***This is implemented by calling getattr(object, name)*** and
> seeing whether it raises an exception or not.)
> """
Thus I believe this is a very ugly implementation. Code implemented
via "__getattr__" can start executing arbitrary Python code, which is
certainly not the behaviour "hasattr" was designed for (performing a
check only), and it's too hard to spot these situations, which creates
potential holes for viruses etc.
Moreover, the implementation via "try getattr(object, name)" slows down
code evaluation; I wonder why it is implemented this way.
>
> I suppose you could argue for the addition of a __hasattr__ special
> method, but this seems like a really rare use case to justify adding
> such a method (not to mention the language change moratorium is still
> currently in effect).
I think a much easier solution would be to implement an additional
argument to hasattr, e.g. hasattr(obj, field,
onlyLookupInExistingFields={True/False}). I think it would be better to
default it to True; now it behaves as if it were False.
D.
Re: hasattr + __getattr__: I think this is Python bug
dmitrey wrote: > hi all, > I have a class (FuncDesigner oofun) that has no attribute "size", but > it is overloaded in __getattr__, so if someone invokes > "myObject.size", it is generated (as another oofun) and connected to > myObject as attribute. > > So, when I invoke in other code part "hasattr(myObject, 'size')", > instead o returning True/False it goes to __getattr__ and starts > constructor for another one oofun, that wasn't intended behaviour. > Thus "hasattr(myObject, 'size')" always returns True. It prevents me > of some bonuses and new features to be done in FuncDesigner. > 'size' in dir(b) > False hasattr(b,'size') > True 'size' in dir(b) > True > > Could you fix it? This isn't a bug, it is by design: try reading the documentation for 'hasattr' as that explains that it is implemented by calling getattr and seeing whether or not it throws an exception. If you want different behaviour you just need to define your own helper function that has the behaviour you desire. e.g. one that just looks in the object's dictionary so as to avoid returning true for properties or other such fancy attributes. -- Duncan Booth http://kupuguy.blogspot.com -- http://mail.python.org/mailman/listinfo/python-list
urllib2 test fails (2.7, linux)
Hi, running the Python 2.7 test suite for urllib2 there is a test that doesn't pass. Do you have an idea about where the problem could be and how to solve it? Thanks, best regards.

$ # ubuntu 8.04
$ pwd
~/sandbox/2.7/lib/python2.7/test
$ python test_urllib2.py
==
ERROR: test_file (__main__.HandlerTests)
--
Traceback (most recent call last):
  File "test_urllib2.py", line 711, in test_file
    h.file_open, Request(url))
  File "/home/redt/sandbox/2.7/lib/python2.7/unittest/case.py", line 456, in assertRaises
    callableObj(*args, **kwargs)
  File "/home/redt/sandbox/2.7/lib/python2.7/urllib2.py", line 1269, in file_open
    return self.open_local_file(req)
  File "/home/redt/sandbox/2.7/lib/python2.7/urllib2.py", line 1301, in open_local_file
    (not port and socket.gethostbyname(host) in self.get_names()):
gaierror: [Errno -5] No address associated with hostname

Notes:
$ hostname
speedy
$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain speedy
::1 localhost speedy ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Finally, I don't know if this matters but the tests have been executed offline (without an internet connection). -- http://mail.python.org/mailman/listinfo/python-list
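The failure boils down to socket.gethostbyname() raising socket.gaierror whenever the resolver cannot map a hostname to an address, which is exactly what happens offline. A quick sketch (the ".invalid" TLD is reserved and should never resolve):

```python
import socket

try:
    socket.gethostbyname("nonexistent.invalid")  # reserved TLD, never resolves
except socket.gaierror as e:
    # Same error class the test run hit:
    # "gaierror: [Errno -5] No address associated with hostname"
    print("resolution failed:", e)
```

The specific errno value varies with the platform and resolver configuration, so only the exception class is dependable.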
Re: How to pass the shell in Python
On Tue, Jul 20, 2010 at 7:11 AM, Ranjith Kumar wrote: > Hi Folks, > Can anyone tell me how to run shell commands using python script. For simple work, I generally use an os.system call. -- Regards, S.Selvam " I am because we are " -- http://mail.python.org/mailman/listinfo/python-list
Re: hasattr + __getattr__: I think this is Python bug
dmitrey wrote: hi all, I have a class (FuncDesigner oofun) that has no attribute "size", but it is overloaded in __getattr__, so if someone invokes "myObject.size", it is generated (as another oofun) and connected to myObject as attribute. So, when I invoke in other code part "hasattr(myObject, 'size')", instead o returning True/False it goes to __getattr__ and starts constructor for another one oofun, that wasn't intended behaviour. Thus "hasattr(myObject, 'size')" always returns True. It prevents me of some bonuses and new features to be done in FuncDesigner. 'size' in dir(b) False hasattr(b,'size') True 'size' in dir(b) True Could you fix it?

Quite simple: when calling b.size, return the computed value but do not set it as an attribute. The value will be computed on each b.size call, and hasattr will behave as expected. If you don't want to compute it each time, because there is no reason for it to change, use a private dummy attribute to record the value instead of size (__size for instance) and return the value of __size when getting the value of size.

class Foo(object):
    def __init__(self):
        self.__size = None

    def __getattr__(self, name):
        if name == "size":
            if self.__size is None:
                self.__size = 5  # compute the value
            return self.__size
        raise AttributeError(name)

b = Foo()
print 'size' in dir(b)
print hasattr(b, 'size')
print 'size' in dir(b)

False
True
False

JM -- http://mail.python.org/mailman/listinfo/python-list
Re: How is memory managed in python?
> In my web application (Django) I call a function for some request which > loads like 500 MB data from the database uses it to do some calculation and > stores the output in disk. I just wonder even after this request is served > the apache / python process is still shows using that 500 MB, why is it so? > Can't I release that memory? Are you talking about resident or virtual memory here? -- http://mail.python.org/mailman/listinfo/python-list
Re: hasattr + __getattr__: I think this is Python bug
Am 20.07.2010 12:10, schrieb dmitrey: > hi all, > I have a class (FuncDesigner oofun) that has no attribute "size", but > it is overloaded in __getattr__, so if someone invokes > "myObject.size", it is generated (as another oofun) and connected to > myObject as attribute. How about using a property instead of the __getattr__() hook? A property is a computed attribute that (among other things) plays much nicer with hasattr. -- http://mail.python.org/mailman/listinfo/python-list
Trying to redirect every URL request to the test.py script with the visitor's page request as a URL parameter.
Hello guys! This is my first post in this group!
I'm trying to create a Python script that takes a visitor's page request
as a URL parameter, then inserts or updates the counters database table,
and then renders the template (my templates are actually HTML files),
which has special string identifiers/format characters in it, so as to
replace them with actual data from within my Python script.
While the mod_rewrite redirect works OK: when I try http://webville.gr,
Apache by default tries to open the index.html file, right?
But due to
Code:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -f
RewriteRule ^/?(.+) /cgi-bin/test.py?page=$1 [L,PT]
it redirects the URL to the test.py script, and I expect it to give my
Python script the initial URL (before the mod_rewrite) as a URL
parameter.
While I tested it and my script works OK (it's a simple test CGI script
after all), when I enable the mod_rewrite code within the .htaccess file
I get an Internal Server Error.
You can see it if you try http://webville.gr/index.html, while if I
disable the mod_rewrite code my test.py script works, producing
results.
That leads me to believe that the initial request never gets passed as
a URL parameter.
My Python script is this simple one:
Code:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import cgitb
import os, sys, socket, datetime
cgitb.enable()
print ( "Content-type: text/html\n" )
# get some enviromental values
if os.environ.has_key('HTTP_REFERER'):
    page = os.environ['HTTP_REFERER']
else:
    page = "tipota"
host = socket.gethostbyaddr( os.environ['REMOTE_ADDR'] )[0]
date = datetime.datetime.now().strftime( '%y-%m-%d %H:%M:%S' )
print page, host, date
Can you help please?
Also, I want the code to redirect to test.py only if the initial
request is for a .html page, not for every page.
Thanks again!
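One likely culprit, offered only as a sketch: the RewriteRule passes the original request in the `page` query parameter, but the script reads HTTP_REFERER, which is the page the visitor came from, not the requested URL. Reading QUERY_STRING instead would look something like this (using urllib's parser as an alternative to the cgi module; untested against the actual Apache setup):

```python
import os

try:
    from urllib.parse import parse_qs   # Python 3
except ImportError:
    from urlparse import parse_qs       # Python 2

# mod_rewrite appended "?page=<original request>", so pull it from
# QUERY_STRING rather than HTTP_REFERER.
query = os.environ.get('QUERY_STRING', '')
page = parse_qs(query).get('page', ['tipota'])[0]
print(page)
```

With the rewrite in place, a request for /index.html should arrive as QUERY_STRING "page=index.html".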
Re: hasattr + __getattr__: I think this is Python bug
On 20 июл, 15:00, Jean-Michel Pichavant wrote: > dmitrey wrote: > > hi all, > > I have a class (FuncDesigner oofun) that has no attribute "size", but > > it is overloaded in __getattr__, so if someone invokes > > "myObject.size", it is generated (as another oofun) and connected to > > myObject as attribute. > > > So, when I invoke in other code part "hasattr(myObject, 'size')", > > instead o returning True/False it goes to __getattr__ and starts > > constructor for another one oofun, that wasn't intended behaviour. > > Thus "hasattr(myObject, 'size')" always returns True. It prevents me > > of some bonuses and new features to be done in FuncDesigner. > > 'size' in dir(b) > > > False > > hasattr(b,'size') > > > True > > 'size' in dir(b) > > > True > > > Could you fix it? > > Quite simple, when calling b.size, return the computed value but do not > set it as attribute, the value will be computed on each b.size call, and > hasattr w. If you don't want to compute it each time, cause there no > reason for it to change, use a private dummy attribute to record the > value instead of size (__size for instance) and return the value of > __size when getting the value of size. > > class Foo(object): > def __init__(self): > self.__size = None > > def __getattr__(self, name): > if name == "size": > if self.__size is None: > self.__size = 5 # compute the value > return self.__size > raise AttributeError(name) > > b = Foo() > print 'size' in dir(b) > print hasattr(b, 'size') > print 'size' in dir(b) > > False > True > False > > JM This doesn't square with the following issue: sometimes a user can write "myObject.size = (some integer value)" in code, and then it will be involved in future calculations as an ordinary fixed value; if the user doesn't supply it, but myObject.size is involved in calculations, then the oofun is created to behave like the similar numpy.array attribute. -- http://mail.python.org/mailman/listinfo/python-list
Kick off a delete command from python and not wait
I have a requirement to kick off a shell script from a python script without waiting for it to complete. I am not bothered about any return code from the script. What is the easiest way to do this. I have looked at popen but cannot see how to do it. -- http://mail.python.org/mailman/listinfo/python-list
Re: hasattr + __getattr__: I think this is Python bug
> e.g. one that just looks in the object's dictionary so as to avoid returning
> true for properties or other such fancy attributes.

So can anyone explain to me how to look into an object's dict? As I wrote, "something in dir(...)" requires O(numOfFields), while I would like to use O(log(n)).

> How about using a property instead of the __getattr__() hook? A property is a
> computed attribute that (among other things) plays much nicer with hasattr.

Could anyone provide an example implementation of that, taking into account that a user can manually set "myObject.size" to an integer value? -- http://mail.python.org/mailman/listinfo/python-list
Re: hasattr + __getattr__: I think this is Python bug
On 2010-07-20, dmitrey wrote: > This doesn't stack with the following issue: sometimes user can > write in code "myObject.size = (some integer value)" and then > it will be involved in future calculations as ordinary fixed > value; if user doesn't supply it, but myObject.size is involved > in calculations, then the oofun is created to behave like > similar numpy.array attribute. Telling them, "Don't do that," is a good solution in Python. -- Neil Cerutti -- http://mail.python.org/mailman/listinfo/python-list
Re: How is memory managed in python?
Chris, Thanks for the link. On Mon, Jul 19, 2010 at 11:43 PM, Chris Rebert wrote: > On Mon, Jul 19, 2010 at 6:30 PM, Vishal Rana wrote: > > Hi, > > In my web application (Django) I call a function for some request which > > loads like 500 MB data from the database uses it to do some calculation > and > > stores the output in disk. I just wonder even after this request is > served > > the apache / python process is still shows using that 500 MB, why is it > so? > > Can't I release that memory? > > > http://effbot.org/pyfaq/why-doesnt-python-release-the-memory-when-i-delete-a-large-object.htm > > There are multiple layers of memory allocation involved. To avoid > thrashing+fragmentation and to improve efficiency, free memory is not > always immediately returned to the operating system. Example: If your > problem involved calling your 500MB function twice, free-ing after the > first call and then immediately re-allocating another 500MB of memory > for the second call would waste time. > > Cheers, > Chris > -- > http://blog.rebertia.com > -- http://mail.python.org/mailman/listinfo/python-list
Re: hasattr + __getattr__: I think this is Python bug
On 20 июл, 18:39, Neil Cerutti wrote: > On 2010-07-20, dmitrey wrote: > > > This doesn't stack with the following issue: sometimes user can > > write in code "myObject.size = (some integer value)" and then > > it will be involved in future calculations as ordinary fixed > > value; if user doesn't supply it, but myObject.size is involved > > in calculations, then the oofun is created to behave like > > similar numpy.array attribute. > > Telling them, "Don't do that," is a good solution in Python. > > -- > Neil Cerutti But this is already a documented feature, and it works as intended, so moving it into something like "myObject._size" would bring backward incompatibility and break the FuncDesigner user API and style, where no underscores are present; it would look like the hack it really is. Sometimes knowing the size value a priori as a fixed integer brings some code speedup; also, if a user supplies the value, a check of the computed value against the provided size is performed each time. -- http://mail.python.org/mailman/listinfo/python-list
Re: How is memory managed in python?
Hi Christian, I am not sure which one is used in this case, I use htop to see the memory used by apache / python. Thanks Vishal Rana On Tue, Jul 20, 2010 at 5:31 AM, Christian Heimes wrote: > > In my web application (Django) I call a function for some request which > > loads like 500 MB data from the database uses it to do some calculation > and > > stores the output in disk. I just wonder even after this request is > served > > the apache / python process is still shows using that 500 MB, why is it > so? > > Can't I release that memory? > > Are you talking about resident or virtual memory here? > > -- > http://mail.python.org/mailman/listinfo/python-list > -- http://mail.python.org/mailman/listinfo/python-list
Re: How is memory managed in python?
Thanks for your input. On Mon, Jul 19, 2010 at 7:23 PM, Scott McCarty wrote: > I had this exactly same problem with Peel and as far as I could find there > is no way reclaiming this memory unless you set max requests, which will > kill the Apache children processes after that number of requests. It's > normally something used for debugging, but can be used to reclaim ram. > > On the flip side, you could find your machine servers down and your child > processes will reuse that memory when they receive another request that uses > a huge amount of ram. It really depends on how often you are doing that > kind of processing, how you want to tune apache. > > Scott M > > On Jul 19, 2010 9:31 PM, "Vishal Rana" wrote: > > Hi, > > > > In my web application (Django) I call a function for some request which > > loads like 500 MB data from the database uses it to do some calculation > and > > stores the output in disk. I just wonder even after this request is > served > > the apache / python process is still shows using that 500 MB, why is it > so? > > Can't I release that memory? > > > > Thanks > > Vishal Rana > -- http://mail.python.org/mailman/listinfo/python-list
Re: hasattr + __getattr__: I think this is Python bug
dmitrey wrote: On 20 июл, 15:00, Jean-Michel Pichavant wrote: dmitrey wrote: hi all, I have a class (FuncDesigner oofun) that has no attribute "size", but it is overloaded in __getattr__, so if someone invokes "myObject.size", it is generated (as another oofun) and connected to myObject as attribute. So, when I invoke in other code part "hasattr(myObject, 'size')", instead o returning True/False it goes to __getattr__ and starts constructor for another one oofun, that wasn't intended behaviour. Thus "hasattr(myObject, 'size')" always returns True. It prevents me of some bonuses and new features to be done in FuncDesigner. 'size' in dir(b) False hasattr(b,'size') True 'size' in dir(b) True Could you fix it? Quite simple, when calling b.size, return the computed value but do not set it as attribute, the value will be computed on each b.size call, and hasattr w. If you don't want to compute it each time, cause there no reason for it to change, use a private dummy attribute to record the value instead of size (__size for instance) and return the value of __size when getting the value of size. class Foo(object): def __init__(self): self.__size = None def __getattr__(self, name): if name == "size": if self.__size is None: self.__size = 5 # compute the value return self.__size raise AttributeError(name) b = Foo() print 'size' in dir(b) print hasattr(b, 'size') print 'size' in dir(b) False True False JM This doesn't stack with the following issue: sometimes user can write in code "myObject.size = (some integer value)" and then it will be involved in future calculations as ordinary fixed value; if user doesn't supply it, but myObject.size is involved in calculations, then the oofun is created to behave like similar numpy.array attribute.

Here are some solutions, in my humble order of preference:
1/ ask the user to always fill the size field
2/ ask the user to never fill the size field (you can override __setattr__ to make sure...)
3/ override __setattr__ to set __size instead of size
JM -- http://mail.python.org/mailman/listinfo/python-list
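Suggestion 3 can be sketched in a few lines (illustrative only; the class name and the computed value 5 are stand-ins, not FuncDesigner code): assignments to size are diverted into a private _size slot, and a property serves the reads, so the instance dict never gains a public size entry:

```python
class OOFun(object):
    def __setattr__(self, name, value):
        # Divert a user-supplied "size" into the private slot.
        if name == 'size':
            name = '_size'
        object.__setattr__(self, name, value)

    @property
    def size(self):
        # Use the user-supplied value if present, else compute once.
        if getattr(self, '_size', None) is None:
            self._size = 5  # stand-in for the real computation
        return self._size

f = OOFun()
f.size = 42                   # stored as f._size
print(f.size)                 # 42
print('size' in f.__dict__)   # False: the public name never hits the dict
```

Because the public name never lands in the instance dict, a cheap dict-membership check can distinguish "user set it" from "needs computing".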
Re: hasattr + __getattr__: I think this is Python bug
dmitrey wrote: >> e.g. one that just looks in the object's dictionary so as to avoid >> returning true for properties or other such fancy attributes. > So can anyone explain me how to look into object's dict? As I have > wrote, "something in dir(...)" requires O(numOfFields) while I would > like to use o(log(n))

O(1) might be even better. Try this:

def dmitry_hasattr(obj, name):
    """Checks for existence of an attribute directly in an object's
    dictionary. Doesn't see all attributes but doesn't have side
    effects."""
    d = getattr(obj, '__dict__', None)
    if d is not None:
        return name in d
    return False

-- Duncan Booth http://kupuguy.blogspot.com -- http://mail.python.org/mailman/listinfo/python-list
Re: How is memory managed in python?
Am 20.07.2010 17:50, schrieb Vishal Rana: > Hi Christian, > > I am not sure which one is used in this case, I use htop to see the memory > used by apache / python. In its default configuration htop reports three different types of memory usage: virt, res and shr (virtual, resident and shared memory). Which of them stays at 500 MB? Christian -- http://mail.python.org/mailman/listinfo/python-list
Re: hasattr + __getattr__: I think this is Python bug
On 7/20/10 6:59 AM, dmitrey wrote: On Jul 20, 1:37 pm, Chris Rebert wrote: Least ugly suggestion: Just don't use hasattr(); use your `x in dir(y)` trick instead. something in dir() consumes O(n) operations for lookup, while hasattr or getattr() require O(log(n)). It matters for me, because it's inside deeply nested mathematical computations FuncDesigner is made for. I'm not sure where you are getting log(n) from, but regardless, if you have so many attributes that the O() matters more than the constant, you have more problems than this. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco -- http://mail.python.org/mailman/listinfo/python-list
Re: hasattr + __getattr__: I think this is Python bug
On 7/20/10 11:39 AM, dmitrey wrote:
e.g. one that just looks in the object's dictionary so as to avoid returning
true for properties or other such fancy attributes.
So can anyone explain me how to look into object's dict? As I have
wrote, "something in dir(...)" requires O(numOfFields) while I would
like to use o(log(n))
('size' in obj.__dict__) is O(1) with a pretty low constant.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
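To make the point concrete, membership in the instance dict is a plain dict lookup and never invokes __getattr__ (a sketch; the names are illustrative):

```python
calls = []

class Lazy(object):
    def __getattr__(self, name):
        calls.append(name)  # record the side effect a hasattr() probe would cause
        return "computed"

obj = Lazy()
obj.width = 3
print('width' in obj.__dict__)  # True: plain O(1) dict lookup
print('size' in obj.__dict__)   # False, and...
print(calls)                    # []: __getattr__ was never triggered
```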
Re: hasattr + __getattr__: I think this is Python bug
On Tue, Jul 20, 2010 at 9:39 AM, dmitrey wrote: >>How about using a property instead of the __getattr__() hook? A property is a >>computed attribute that (among other things) plays much nicer with hasattr. > > Could anyone provide an example of it to be implemented, taking into > account that a user can manually set "myObject.size" to an integer > value?

The following will work in Python 2.6+:

class MyClass(object):
    @property
    def size(self):
        if not hasattr(self, '_size'):
            self._size = self._compute_size()
        return self._size

    @size.setter
    def size(self, value):
        self._size = value

To support earlier versions of Python, you would write it like so:

class MyClass(object):
    def _get_size(self):
        if not hasattr(self, '_size'):
            self._size = self._compute_size()
        return self._size

    def _set_size(self, value):
        self._size = value

    size = property(_get_size, _set_size)

-- http://mail.python.org/mailman/listinfo/python-list
Compile python executable only for package deployment on Linux
Hi,
I have created a simple tool (a Python script) that creates a
self-sufficient package ready for deployment. The current implementation
is based on shell scripting to set up the environment for the app and
finally execute "python main.py".
I am planning to convert "main.py" into an executable. The plan is to
rip the unnecessary code (such as command-line argument handling) out of
the source code that produces the python executable, embed "main.py" as
a Python string (hardcoded inside the executable's source), execute it
using "exec" or similar methods, and finally produce an executable.
Am I right here? Is this the correct approach?
For example a simple script:
import os
import math
print math.sin(23.0)
print os.getenv("PATH")
Once I convert the above script, as mentioned, into an executable, say
"myapp", executing "myapp" will print the messages on the
console.
Cheers
Prashant
Re: How is memory managed in python?
Christian, It stays in RES and VIRT as well. Thanks Vishal Rana On Tue, Jul 20, 2010 at 8:53 AM, Christian Heimes wrote: > Am 20.07.2010 17:50, schrieb Vishal Rana: > > Hi Christian, > > > > I am not sure which one is used in this case, I use htop to see the > memory > > used by apache / python. > > In its default configuration htop reports three different types of > memory usage: virt, res and shr (virtual, resident and shared memory). > Which of them stays at 500 MB? > > Christian > -- http://mail.python.org/mailman/listinfo/python-list
Re: Compile python executable only for package deployment on Linux
King, 20.07.2010 18:45:
I have created a simple tool(python script) that creates a self
sufficient package ready for deployment. Current implementation is
based on shell scripting to set environment for the app and finally
execute "python main.py".
I am planning to convert "main.py" into an executable. The plan is to
rip the unnecessary code from source code that produce python
executable such as command line arguments etc, use "main.py" as python
string (hardcoded inside executable source) and execute it using
"exec" or similar methods and finally creates executable.
Am I right here? Is this is the correct approach?
From what you write here, I'm not exactly sure what you want to achieve,
but...
For example a simple script:
import os
import math
print math.sin(23.0)
print os.getenv("PATH")
Once I'll convert above script as i have mentioned above in an
executable say: "myapp", executing "myapp" will print messages on
console.
Assuming that Python is installed on the machine and readily runnable from
the PATH, this will make the script executable:
#!/usr/bin/env python
import os
import math
...
Note that the file must have the executable bit set.
Search for "shebang", which is the spelled-out name for the "#!" special
file prefix.
Stefan
Re: How to pass the shell in Python
sub = subprocess.Popen("shell command", shell=True)
If you have to wait until the shell finishes its commands before
continuing with the next Python code, you can add another line:
sub.wait()
On Tue, Jul 20, 2010 at 7:57 AM, S.Selvam wrote:
>
>
> On Tue, Jul 20, 2010 at 7:11 AM, Ranjith Kumar wrote:
>
>> Hi Folks,
>> Can anyone tell me how to run shell commands using python script.
>>
>
>>
>
> For simple work, i generally use os.system call.
>
> --
> Regards,
> S.Selvam
>
>" I am because we are "
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>
Re: Kick off a delete command from python and not wait
On Tue, Jul 20, 2010 at 8:33 AM, loial wrote: > I have a requirement to kick off a shell script from a python script > without waiting for it to complete. I am not bothered about any return > code from the script. > > What is the easiest way to do this. I have looked at popen but cannot > see how to do it.

Use the `subprocess` module.

import subprocess

proc = subprocess.Popen(["shell_script.sh", "arg1", "arg2"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
# lots of code here doing other stuff
proc.wait()

I believe you need to /eventually/ call .wait() as shown to avoid the child becoming a zombie process.

Cheers, Chris -- http://blog.rebertia.com -- http://mail.python.org/mailman/listinfo/python-list
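For the original fire-and-forget requirement, a variant that never blocks the parent (a sketch; note subprocess.DEVNULL only exists in Python 3.3+, on older versions open os.devnull yourself): discard the child's output so no pipe can fill up, keep the Popen handle around, and poll it occasionally so the child gets reaped. sys.executable stands in here for the real shell script.

```python
import subprocess
import sys

# Launch and move on immediately; output is discarded so the child can
# never block on a full pipe.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('working...')"],
    stdout=subprocess.DEVNULL,   # Python 3.3+; use open(os.devnull, 'w') before that
    stderr=subprocess.DEVNULL,
)

# ... lots of other work here, no waiting ...

# Reap the child whenever convenient; poll() returns None while it is
# still running, and its exit code once it has finished.
print(proc.poll())
```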
Re: Compile python executable only for package deployment on Linux
Hi Stefan,
Well, the idea is similar to packaging tools like pyinstaller or
cx_freeze. Their approach is slightly different from what I intend to
do here.
You have to pass the name of the script to the python executable
("python main.py") in order to execute it. What I mean here is to build
a python executable using only the source code of the python executable
itself (not a compilation of the entire Python source tree), insert the
contents of "main.py" into that source (hardcoded), and produce a new
executable.
When you run that executable, it should run "main.py", which was
hard-coded at build time as a fixed string or by some other method. The
executable is actually a modified version of the python executable. It
won't take any arguments, because we'll rip that piece of code out.
Hope this clears things up.
Cheers
prashant
--
http://mail.python.org/mailman/listinfo/python-list
Re: urllib2 test fails (2.7, linux)
On 7/20/2010 7:42 AM, guandalino wrote:
> Hi, running Python 2.7 test suite for urllib2 there is a test that
> doesn't pass. Do you have an idea about where the problem could be
> and how to solve it? Thanks, best regards.
>
> $ # ubuntu 8.04
> $ pwd
> ~/sandbox/2.7/lib/python2.7/test
> $ python test_urllib2.py
> ==
> ERROR: test_file (__main__.HandlerTests)
> --
> Traceback (most recent call last):
> File "test_urllib2.py", line 711, in test_file
> h.file_open, Request(url))
Look there to find the complete statement. For 3.1, it would be
self.assertRaises(urllib.error.URLError, h.file_open, Request(url))
(urllib2 is now urllib.request)
You could insert a print to find what url caused a problem.
> File "/home/redt/sandbox/2.7/lib/python2.7/unittest/case.py", line
This puzzles me. In 3.1, unittest is a module, unittest.py, not a
package containing modules like 'case.py'. I thought it was the same
for 2.7.
> 456, in assertRaises
> callableObj(*args, **kwargs)
This is in unittest.py. It says that this test case *should* fail, but
with a different error (urllib.error.URLError) than the one you got
(gaierror).
> File "/home/redt/sandbox/2.7/lib/python2.7/urllib2.py", line 1269,
> in file_open
> return self.open_local_file(req)
> File "/home/redt/sandbox/2.7/lib/python2.7/urllib2.py", line 1301,
> in open_local_file
> (not port and socket.gethostbyname(host) in self.get_names()):
> gaierror: [Errno -5] No address associated with hostname
gaierror comes from socket.gethostbyname
> Finally, I don't know if this matters but the tests have been
> executed offline (without an internet connection).
Since the error is in open_local_file, I would think not.
--
Terry Jan Reedy
--
http://mail.python.org/mailman/listinfo/python-list
convert time to UTC seconds since epoch
Hi, list
How with the Python standard library can I convert a string like
'YYYY-MM-DD mm:HH:SS ZONE' to seconds since epoch in UTC? ZONE may be
a literal time zone or given in an explicit way like +0100.
--
http://mail.python.org/mailman/listinfo/python-list
Re: convert time to UTC seconds since epoch
On Jul 20, 2010, at 12:26 , Alexander wrote: > Hi, list > > How with python standard library to convert string like '-MM-DD > mm:HH:SS ZONE' to seconds since epoch in UTC? ZONE may be literal time > zone or given in explicit way like +0100. If you have a sufficiently recent version of Python, have you considered time.strptime: http://docs.python.org/library/time.html#time.strptime ? HTH, Rami - Rami Chowdhury "Never assume malice when stupidity will suffice." -- Hanlon's Razor 408-597-7068 (US) / 07875-841-046 (UK) / 0189-245544 (BD) -- http://mail.python.org/mailman/listinfo/python-list
linux console command line history
Hi to all,
I'm writing a linux console app with sockets. It's basically a client
app that fires commands at a server. For example:
$log user 55
$sessions list
$server list
etc.
What I want is, after entering some commands, to press the up arrow
key and see the previous commands that I have executed.
Any hints? Any examples?
Antonis
--
http://mail.python.org/mailman/listinfo/python-list
Re: Kick off a delete command from python and not wait
On Tue, 20 Jul 2010 10:32:12 -0700, Chris Rebert wrote: > I believe you need to /eventually/ call .wait() as shown to avoid the > child becoming a zombie process. Alternatively, you can call .poll() periodically. This is similar to .wait() insofar as it will "reap" the process if it has terminated, but unlike .wait(), .poll() won't block if the process is still running. On Unix, you can use os.fork(), have the child execute the command in the background, and have the parent wait for the child with os.wait(). The child will terminate as soon as it has spawned the grandchild, and the grandchild will be reaped automatically upon termination (so you can forget about it). -- http://mail.python.org/mailman/listinfo/python-list
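The os.fork() approach described above can be sketched as follows (my illustration, Unix only — the command passed in is up to the caller):

```python
import os

def spawn_detached(argv):
    # Double-fork idiom: run argv in the background with no zombie
    # left for the caller to reap.
    pid = os.fork()
    if pid == 0:                      # first child
        if os.fork() == 0:            # grandchild does the real work
            os.execvp(argv[0], argv)  # never returns on success
        os._exit(0)                   # first child exits at once
    os.waitpid(pid, 0)                # reap the first child; the
                                      # grandchild is reparented to
                                      # init, which reaps it for us
```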
Re: linux console command line history
On Tue, Jul 20, 2010 at 2:38 PM, [email protected] wrote: > Hi to all, > I 'm writing a linux console app with sockets. It's basically a client > app that fires commands in a server. > For example: > $log user 55 > $sessions list > $server list etc. > What i want is, after entering some commands, to press the up arrow > key and see the previous commands that i have executed. > Any hints? Any examples? > > Antonis > -- Look at the readline module. http://docs.python.org/library/readline.html -- http://mail.python.org/mailman/listinfo/python-list
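As a quick illustration of what readline buys you (the command strings below are made up): merely importing the module enables up-arrow history editing for input(), and the history is also reachable programmatically:

```python
import readline

# Seed the history as if these commands had been typed; pressing the
# up arrow at an input() prompt would now recall them.
readline.add_history("log user 55")
readline.add_history("sessions list")

# History items are 1-indexed; fetch the most recent entry.
last = readline.get_history_item(readline.get_current_history_length())
print(last)
# sessions list
```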
How to treat the first or last item differently
A Python newcomer asked this question on python-ideas list.
I am answering here for the benefit of others.
Example: building a string res with commas separating substrings s from
some sequence. Either the first item added must be s versus ', '+s or
the last must be s versus s+', '.
For building strings, of course, the join method solves the problem,
adding n-1 separators between n items.
items = ['first', 'second', 'last']
print(', '.join(items))
#first, second, last
DISCLAIMER: All of the following code chunks produce the same result,
for the purpose of illustration, but they are NOT the way to build a
string result with dependable efficiency.
To treat the first item differently, either peel it off first ...
it = iter(items)
try:  # protect against empty iterable; simplify if known non-empty
    res = next(it)
    for s in it: res += ', ' + s
except StopIteration:
    res = ''
print(res)
# first, second, last
or use a flag.
res = ''
First = True
for s in items:
    if First:
        res = s
        First = False
    else:
        res += ', ' + s
print(res)
# first, second, last
There is no way, in general, to know whether next(it) will yield another
item after the current one. That suggests that the way to know whether
an item is the last or not is to try to get another first, before
processing the current item.
One approach is to look ahead ...
it = iter(items)
res = ''
try:
    cur = next(it)
    for nxt in it:
        # cur is not last
        res += cur + ', '
        cur = nxt
    else:
        # cur is last item
        res += cur
except StopIteration:
    pass
print(res)
# first, second, last
Another is to add a unique sentinel to the sequence.
Last = object()
items.append(Last)  # so not empty, so no protection against that needed
it = iter(items)
res = ''
cur = next(it)
for nxt in it:
    if nxt is not Last:
        res += cur + ', '
        cur = nxt
    else:
        res += cur
print(res)
# first, second, last
It makes sense to separate last detection from item processing so last
detection can be put in a library module and reused.
def o_last(iterable):
    "Yield item, islast pairs"
    it = iter(iterable)
    cur = next(it)
    for nxt in it:
        yield cur, False
        cur = nxt
    else:
        yield cur, True

def comma_join(strings):
    res = ''
    for s, last in o_last(strings):
        res += s
        if not last: res += ', '
    return res
print(comma_join(['first', 'second', 'last']))
print(comma_join(['first', 'last']))
print(comma_join(['last']))
print(comma_join([]))
# first, second, last
# first, last
# last
#
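A variation (my addition, not part of the original post) combines the sentinel and library-module ideas without mutating the input list, by chaining a sentinel onto any iterable:

```python
from itertools import chain

_SENTINEL = object()

def mark_last(iterable):
    "Yield (item, islast) pairs for any iterable, without mutating it."
    it = chain(iterable, [_SENTINEL])
    cur = next(it)            # safe: the sentinel guarantees one item
    for nxt in it:
        yield cur, nxt is _SENTINEL
        cur = nxt

res = ''
for s, last in mark_last(['first', 'second', 'last']):
    res += s if last else s + ', '
print(res)
# first, second, last
```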
--
Terry Jan Reedy
--
http://mail.python.org/mailman/listinfo/python-list
Pickle MemoryError - any ideas?
I have created a class that contains a list of files (contents,
binary) - so it uses a LOT of memory.
When I first pickle.dump the list it creates a 1.9GByte file on the
disk. I can load the contents back again, but when I attempt to dump
it again (with or without additions), I get the following:
Traceback (most recent call last):
File "", line 1, in
File "c:\Python26\Lib\pickle.py", line 1362, in dump
Pickler(file, protocol).dump(obj)
File "c:\Python26\Lib\pickle.py", line 224, in dump
self.save(obj)
File "c:\Python26\Lib\pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "c:\Python26\Lib\pickle.py", line 600, in save_list
self._batch_appends(iter(obj))
File "c:\Python26\Lib\pickle.py", line 615, in _batch_appends
save(x)
File "c:\Python26\Lib\pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "c:\Python26\Lib\pickle.py", line 488, in save_string
self.write(STRING + repr(obj) + '\n')
MemoryError
I get this error either attempting to dump the entire list or dumping
it in "segments" i.e. the list is 2229 elements long, so from the
command line I attempted using pickle to dump individual parts of the
list into files, i.e. every 500 elements were saved to their own
file - but I still get the same error.
I used the following sequence when attempting to dump the list in
segments - X and Y were 500 element indexes apart, the sequence fails
on [1000:1500]:
f = open('archive-1', 'wb', 2)
pickle.dump(mylist[X:Y], f)
f.close()
I am assuming that available memory has been exhausted, so I tried
"waiting" between dumps in the hopes that garbage collection might
free some memory - but that doesn't help at all.
In summary:
1. The list gets originally created from various sources
2. the list can be dumped successfully
3. the program restarts and successfully loads the list
4. the list can not be (re) dumped without getting a MemoryError
This seems like a bug in pickle?
Any ideas (other than the obvious - don't save all of these files
contents into a list! Although that is the only "answer" I can see at
the moment :-)).
Thanks
Peter
--
http://mail.python.org/mailman/listinfo/python-list
Exposing buffer interface for non-extension types?
Is there any way to expose the PEP 3118 buffer interface for objects that aren't extension types? Currently, I can expose the NumPy array interface (using either __array_interface__ or __array_struct__) for any class, extension or otherwise. But I can't find any reference to python-side interfacing for PEP 3118. SWIG makes an extension module for your wrapped code, but not extension *types*, so the classes it produces are pure-python with methods added in from the extension module. The NumPy array interface works fine for now (especially since NumPy is the only thing I need to consume these objects), but the documentation claims that it's being deprecated in favor of PEP 3118, so I thought it might be relevant to bring this up. -- http://mail.python.org/mailman/listinfo/python-list
Re: Exposing buffer interface for non-extension types?
Ken Watford, 21.07.2010 00:09: Is there any way to expose the PEP 3118 buffer interface for objects that aren't extension types? Given that it's a pure C-level interface, I don't think there would be much use for that. Currently, I can expose the NumPy array interface (using either __array_interface__ or __array_struct__) for any class, extension or otherwise. But I can't find any reference to python-side interfacing for PEP 3118. SWIG makes an extension module for your wrapped code, but not extension *types*, so the classes it produces are pure-python with methods added in from the extension module. Try using Cython instead, it has native support for the buffer protocol. Stefan -- http://mail.python.org/mailman/listinfo/python-list
Re: convert time to UTC seconds since epoch
On 21.07.2010 00:46, Rami Chowdhury wrote:
> On Jul 20, 2010, at 12:26 , Alexander wrote:
>> How with python standard library to convert string like 'YYYY-MM-DD
>> mm:HH:SS ZONE' to seconds since epoch in UTC? ZONE may be literal time
>> zone or given in explicit way like +0100.
> If you have a sufficiently recent version of Python, have you
> considered time.strptime:
> http://docs.python.org/library/time.html#time.strptime ?
Yes. Maybe I don't understand something, but it seems strptime doesn't
work with timezones at all. It only understands the local zone and
dates without zones.
--
http://mail.python.org/mailman/listinfo/python-list
Re: Pickle MemoryError - any ideas?
On 7/20/2010 3:01 PM Peter said... I have created a class that contains a list of files (contents, binary) - so it uses a LOT of memory. Any ideas? Switch to 64 bit Windows & Python? Emile -- http://mail.python.org/mailman/listinfo/python-list
Re: linux console command line history
On Jul 21, 12:47 am, Benjamin Kaplan wrote:
> Look at the readline module.
> http://docs.python.org/library/readline.html
ok that's fine, thanks. I have also found a very helpful example in
PyMOTW: http://www.doughellmann.com/PyMOTW/readline/index.html (Thanks
Doug!!!). But if I want to run this in its own separate thread, how
could I do that? There is a
# Prompt the user for text
input_loop()
which is blocking?
Antonis K.
--
http://mail.python.org/mailman/listinfo/python-list
Re: Pickle MemoryError - any ideas?
On Jul 20, 3:01 pm, Peter wrote:
> I have created a class that contains a list of files (contents,
> binary) - so it uses a LOT of memory.
>
> When I first pickle.dump the list it creates a 1.9GByte file on the
> disk. I can load the contents back again, but when I attempt to dump
> it again (with or without additions), I get the following:
>
> Traceback (most recent call last):
> File "", line 1, in
> File "c:\Python26\Lib\pickle.py", line 1362, in dump
> Pickler(file, protocol).dump(obj)
> File "c:\Python26\Lib\pickle.py", line 224, in dump
> self.save(obj)
> File "c:\Python26\Lib\pickle.py", line 286, in save
> f(self, obj) # Call unbound method with explicit self
> File "c:\Python26\Lib\pickle.py", line 600, in save_list
> self._batch_appends(iter(obj))
> File "c:\Python26\Lib\pickle.py", line 615, in _batch_appends
> save(x)
> File "c:\Python26\Lib\pickle.py", line 286, in save
> f(self, obj) # Call unbound method with explicit self
> File "c:\Python26\Lib\pickle.py", line 488, in save_string
> self.write(STRING + repr(obj) + '\n')
> MemoryError
(Aside) Wow, pickle concatenates strings like this?
> I get this error either attempting to dump the entire list or dumping
> it in "segments" i.e. the list is 2229 elements long, so from the
> command line I attempted using pickle to dump individual parts of the
> list into into files i.e. every 500 elements were saved to their own
> file - but I still get the same error.
>
> I used the following sequence when attempting to dump the list in
> segments - X and Y were 500 element indexes apart, the sequence fails
> on [1000:1500]:
>
> f = open('archive-1', 'wb', 2)
> pickle.dump(mylist[X:Y], f)
> f.close()
First thing to do is try cPickle module instead of pickle.
> I am assuming that available memory has been exhausted, so I tried
> "waiting" between dumps in the hopes that garbage collection might
> free some memory - but that doesn't help at all.
Waiting won't trigger a garbage collection. Well, first of all, it's
not garbage collection but cycle collection (objects not part of
reference cycles are collected immediately after they're destroyed, at
least they are in CPython), and given that your items are all binary
data, I doubt there are many reference cycles in your data.
Anyway, cycle collection is triggered when object creation/deletion
counts meet certain criteria (which won't happen if you are waiting),
but you could call gc.collect() to force a cycle collection.
> In summary:
>
> 1. The list gets originally created from various sources
> 2. the list can be dumped successfully
> 3. the program restarts and successfully loads the list
> 4. the list can not be (re) dumped without getting a MemoryError
>
> This seems like a bug in pickle?
No
> Any ideas (other than the obvious - don't save all of these files
> contents into a list! Although that is the only "answer" I can see at
> the moment :-)).
You should at least consider if one of the dbm-style databases (dbm,
gdbm, or dbhash) meets your needs.
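For instance, the shelve module (built on those dbm backends) would let each file's contents be stored and re-saved individually instead of re-pickling one huge list — a sketch with made-up data:

```python
import os
import shelve
import tempfile

# Hypothetical stand-in for the original list of file contents
files = {"a.bin": b"\x00\x01", "b.bin": b"\x02\x03"}

path = os.path.join(tempfile.mkdtemp(), "archive")
db = shelve.open(path, protocol=2)   # each value is pickled per key
for name, data in files.items():
    db[name] = data                  # items are written individually
db.close()

db = shelve.open(path)
print(sorted(db.keys()))
# ['a.bin', 'b.bin']
db.close()
```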
Carl Banks
--
http://mail.python.org/mailman/listinfo/python-list
Set Python Path for Idle Mac 10.5
Hi,
New to programming and after doing some research I've chosen to work
with Python. One thing that's bothering me is that I would like to set
up a specific folder in my Documents folder to hold my modules. How do
I go about doing this? I've found the way to change it for each IDLE
session but I'd like to make it check the folder automatically. I've
done a bunch of searches and have come up with nothing helpful.
Thanks.
-Ryan
--
http://mail.python.org/mailman/listinfo/python-list
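One common approach (a sketch only — the folder path below is made up, and setting the PYTHONPATH environment variable achieves the same thing persistently) is to append the folder to sys.path before importing:

```python
import sys

MODULE_DIR = "/Users/ryan/Documents/python-modules"   # hypothetical
if MODULE_DIR not in sys.path:
    sys.path.append(MODULE_DIR)   # modules there are now importable
print(MODULE_DIR in sys.path)
# True
```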
Re: Exposing buffer interface for non-extension types?
On Jul 20, 3:09 pm, Ken Watford wrote:
> Is there any way to expose the PEP 3118 buffer interface for objects
> that aren't extension types?
>
> Currently, I can expose the NumPy array interface (using either
> __array_interface__ or __array_struct__) for any class, extension or
> otherwise. But I can't find any reference to python-side interfacing
> for PEP 3118. SWIG makes an extension module for your wrapped code,
> but not extension *types*, so the classes it produces are pure-python
> with methods added in from the extension module.
>
> The NumPy array interface works fine for now (especially since NumPy
> is the only thing I need to consume these objects), but the
> documentation claims that it's being deprecated in favor of PEP 3118,
> so I thought it might be relevant to bring this up.
Can you let us know what you want to use it for? We could offer
better help.
Numpy is generally how I get at buffers in Python 2.x. For instance
if I have an object m that supports the buffer protocol (m could be a
string, mmap object, Python array, etc.), then the following will
create an array view of the same buffer:
numpy.ndarray((10,10), dtype=numpy.float32, buffer=m)
As far as I know this procedure won't be too different under PEP 3118;
if anything it's simplified in Python 3 since it can discover type and
shape information itself. (You'll have to check with the numpy people
on that.)
Carl Banks
--
http://mail.python.org/mailman/listinfo/python-list
Re: Exposing buffer interface for non-extension types?
On Tue, Jul 20, 2010 at 6:58 PM, Stefan Behnel wrote: > Ken Watford, 21.07.2010 00:09: >> >> Is there any way to expose the PEP 3118 buffer interface for objects >> that aren't extension types? > > Given that it's a pure C-level interface, I don't think there would be much > use for that. Perhaps, but *why* is it only a pure C-level interface? It's based on/inspired by the array interface, which was not a pure C-level interface. Did they simply neglect to provide the functionality due to lack of obvious use cases, or did they consciously decide to drop that functionality? >> Currently, I can expose the NumPy array interface (using either >> __array_interface__ or __array_struct__) for any class, extension or >> otherwise. But I can't find any reference to python-side interfacing >> for PEP 3118. SWIG makes an extension module for your wrapped code, >> but not extension *types*, so the classes it produces are pure-python >> with methods added in from the extension module. > > Try using Cython instead, it has native support for the buffer protocol. I've used Cython before, and I generally like it. But its purpose is slightly different than SWIG's, and does not particularly meet my current project's needs. -- http://mail.python.org/mailman/listinfo/python-list
Re: Pickle MemoryError - any ideas?
On 7/20/2010 3:01 PM, Peter wrote: I have created a class that contains a list of files (contents, binary) - so it uses a LOT of memory. When I first pickle.dump the list it creates a 1.9GByte file on the disk. I can load the contents back again, but when I attempt to dump it again (with or without additions), I get the following: Be sure to destroy the pickle object when you're done with it. Don't reuse it. Pickle has a cache - it saves every object pickled, and if the same object shows up more than once, the later instances are represented as a cache ID. This can fill memory unnecessarily. See "http://groups.google.com/group/comp.lang.python/browse_thread/thread/3f8b999c25af263a"; John Nagle -- http://mail.python.org/mailman/listinfo/python-list
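A sketch of that advice in isolation (the data is made up): use a fresh Pickler per dump, or call its clear_memo() method between dumps, so the memo cannot keep every previously-pickled object alive:

```python
import io
import pickle

data = [b"chunk1", b"chunk2"]        # stand-in for big file contents

buf = io.BytesIO()
p = pickle.Pickler(buf, protocol=pickle.HIGHEST_PROTOCOL)
p.dump(data)
p.clear_memo()     # drop references to everything pickled so far
p.dump(data)       # re-pickled in full instead of via the memo

buf.seek(0)
print(pickle.load(buf) == data, pickle.load(buf) == data)
# True True
```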
Re: convert time to UTC seconds since epoch
On Tue, Jul 20, 2010 at 4:51 PM, Alexander wrote: > On 21.07.2010 00:46, Rami Chowdhury wrote: >> On Jul 20, 2010, at 12:26 , Alexander wrote: >> >>> Hi, list >>> >>> How with python standard library to convert string like '-MM-DD >>> mm:HH:SS ZONE' to seconds since epoch in UTC? ZONE may be literal time >>> zone or given in explicit way like +0100. >> If you have a sufficiently recent version of Python, have you considered >> time.strptime: http://docs.python.org/library/time.html#time.strptime ? >> > Yes. May be I don't undertand something. but it seems strptime doesn't > work with timezones at all. Only understands localzone and dates w/o zones. Have you looked at the dateutil.parser module? It's not part of the standard library but probably does what you need. http://labix.org/python-dateutil#head-c0e81a473b647dfa787dc11e8c69557ec2c3ecd2 Cheers, Ian -- http://mail.python.org/mailman/listinfo/python-list
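For what it's worth (my sketch, not from the thread): for zones given numerically like +0100, the standard-library pair email.utils.parsedate_tz / mktime_tz handles the conversion; arbitrary literal zone names are where the stdlib falls short:

```python
import time
from email.utils import mktime_tz, parsedate_tz

s = "Tue, 20 Jul 2010 22:26:00 +0100"   # explicit numeric offset
tt = parsedate_tz(s)       # 10-tuple; the last field is the UTC offset
epoch = mktime_tz(tt)      # seconds since the epoch, in UTC
print(time.strftime("%Y-%m-%d %H:%M", time.gmtime(epoch)))
# 2010-07-20 21:26
```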
Re: Exposing buffer interface for non-extension types?
On Tue, Jul 20, 2010 at 8:28 PM, Carl Banks wrote: > On Jul 20, 3:09 pm, Ken Watford wrote: >> Is there any way to expose the PEP 3118 buffer interface for objects >> that aren't extension types? >> >> Currently, I can expose the NumPy array interface (using either >> __array_interface__ or __array_struct__) for any class, extension or >> otherwise. But I can't find any reference to python-side interfacing >> for PEP 3118. SWIG makes an extension module for your wrapped code, >> but not extension *types*, so the classes it produces are pure-python >> with methods added in from the extension module. >> >> The NumPy array interface works fine for now (especially since NumPy >> is the only thing I need to consume these objects), but the >> documentation claims that it's being deprecated in favor of PEP 3118, >> so I thought it might be relevant to bring this up. > > Can you tell let us know what you want to use it for? We could offer > better help. > > Numpy is generally how I get at buffers in Python 2.x. For instance > if I have an object m that supports buffer protocol (m could a string, > mmap object, Python array, etc.), then the following will create an > array view of the same buffer: > > numpy.ndarray((10,10),type=numpy.float32,buffer=m) > > As far as I know this procedure won't be too different under PEP 3118; > if anything it's simplified in Python 3 since it can discover type and > shape information itself. (You'll have to check with the numpy people > on that.) I'm not having trouble using buffers, I'm having trouble providing them. As a part of SWIG-wrapping a larger C++ project, I'm producing some wrappings for Blitz++ arrays. I can extract the shape and stride information from the array object to fill either NumPy's or PEP 3118's appropriate structure. In the case of NumPy, I can easily arrange for the necessary interface on the proxy object to be fulfilled, because NumPy doesn't care what kind of object it's attached to. 
But the PEP 3118 interface can only be provided by C extension types. One possibility I've considered is injecting a small extension type into the wrapper that provides PEP 3118 by reading the NumPy array interface info off of the object, and then inject it into all appropriate SWIG-generated proxy classes as an additional base class. This isn't a big deal for me - the array interface works just fine, and probably will for longer than I'll be working on this project - but it just struck me as strange that some of my existing array-interface-enabled classes can't be trivially ported to PEP 3118 because they're defined in pure Python modules rather than extension modules. -- http://mail.python.org/mailman/listinfo/python-list
Pydev 1.6.0 Released
Hi All,
Pydev 1.6.0 has been released
Details on Pydev: http://pydev.org
Details on its development: http://pydev.blogspot.com
Release Highlights:
---
* Debugger
  o Code-completion added to the debug console
  o Entries in the debug console are evaluated on a line-by-line basis
    (previously an empty line was needed)
  o Threads started with thread.start_new_thread are now properly
    traced in the debugger
  o Added method -- pydevd.set_pm_excepthook() -- which clients may use
    to debug uncaught exceptions
  o Printing exception when unable to connect in the debugger
* General
  o Interactive console may be created using the eclipse vm (which may
    be used for experimenting with Eclipse)
  o Apply patch working (Fixed NPE when opening compare editor in a
    dialog)
  o Added compatibility to Aptana Studio 3 (Beta) -- release from
    July 12th
What is PyDev?
---
PyDev is a plugin that enables users to use Eclipse for Python, Jython
and IronPython development -- making Eclipse a first class Python IDE --
It comes with many goodies such as code completion, syntax highlighting,
syntax analysis, refactor, debug and many others.
Cheers,
--
Fabio Zadrozny
--
Software Developer
Aptana
http://aptana.com/
Pydev - Python Development Environment for Eclipse
http://pydev.org
http://pydev.blogspot.com
--
http://mail.python.org/mailman/listinfo/python-list
Re: Exposing buffer interface for non-extension types?
On 21 Jul, 02:38, Ken Watford wrote: > Perhaps, but *why* is it only a pure C-level interface? It is exposed to Python as memoryview. If memoryview is not sufficient, we can use ctypes.pythonapi to read the C struct. -- http://mail.python.org/mailman/listinfo/python-list
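To illustrate the consuming side mentioned above — memoryview gives pure Python code access to any object that provides the C-level buffer interface:

```python
# memoryview wraps any buffer-providing object (bytes, bytearray,
# array.array, mmap, ...) without copying its data.
buf = memoryview(b"abcdef")
print(buf[2:4].tobytes())   # zero-copy slice of the underlying buffer
# b'cd'
```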
Re: Exposing buffer interface for non-extension types?
On 7/20/10 8:38 PM, Ken Watford wrote: On Tue, Jul 20, 2010 at 6:58 PM, Stefan Behnel wrote: Ken Watford, 21.07.2010 00:09: Is there any way to expose the PEP 3118 buffer interface for objects that aren't extension types? Given that it's a pure C-level interface, I don't think there would be much use for that. Perhaps, but *why* is it only a pure C-level interface? It's based on/inspired by the array interface, which was not a pure C-level interface. Did they simply neglect to provide the functionality due to lack of obvious use cases, or did they consciously decide to drop that functionality? Lack of obvious use cases. The primary use case is for C extensions to communicate with each other. SWIG is the odd man out in that it does not create true extension types. While the functionality of the PEP derives from numpy's interface, it's inclusion in Python was largely seen as the extension of the older buffer interface which is also a pure C interface. The Python-level __array_interface__ numpy API is not and will not be deprecated despite some outdated language in the documentation. Please continue to use it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco -- http://mail.python.org/mailman/listinfo/python-list
Re: convert time to UTC seconds since epoch
On 2010-07-20, Rami Chowdhury wrote: > If you have a sufficiently recent version of Python, have you >considered time.strptime: >http://docs.python.org/library/time.html#time.strptime ? Given the documentation talks about "double leap seconds" which don't exist, why should this code be trusted? -- http://mail.python.org/mailman/listinfo/python-list
Re: Exposing buffer interface for non-extension types?
On Tue, Jul 20, 2010 at 9:26 PM, Robert Kern wrote: > On 7/20/10 8:38 PM, Ken Watford wrote: >> >> On Tue, Jul 20, 2010 at 6:58 PM, Stefan Behnel >> wrote: >>> >>> Ken Watford, 21.07.2010 00:09: Is there any way to expose the PEP 3118 buffer interface for objects that aren't extension types? >>> >>> Given that it's a pure C-level interface, I don't think there would be >>> much >>> use for that. >> >> Perhaps, but *why* is it only a pure C-level interface? It's based >> on/inspired by the array interface, which was not a pure C-level >> interface. Did they simply neglect to provide the functionality due to >> lack of obvious use cases, or did they consciously decide to drop thaThat >> functionality? > > Lack of obvious use cases. The primary use case is for C extensions to > communicate with each other. SWIG is the odd man out in that it does not > create true extension types. While the functionality of the PEP derives from > numpy's interface, it's inclusion in Python was largely seen as the > extension of the older buffer interface which is also a pure C interface. > > The Python-level __array_interface__ numpy API is not and will not be > deprecated despite some outdated language in the documentation. Please > continue to use it. Thanks, that's good to know. (Someone should probably do something about the big red box that says the opposite in the current docs). I assume the same is true about __array_struct__? Since it *is* an extension despite the non-extension types, filling in a structure is a little more convenient than building the dictionary. -- http://mail.python.org/mailman/listinfo/python-list
Re: convert time to UTC seconds since epoch
On Tue, Jul 20, 2010 at 6:31 PM, Greg Hennessy wrote: > On 2010-07-20, Rami Chowdhury wrote: >> If you have a sufficiently recent version of Python, have you >>considered time.strptime: >>http://docs.python.org/library/time.html#time.strptime ? > > Given the documentation talks about "double leap seconds" which don't > exist, why should this code be trusted? Because they exist(ed) in POSIX. See http://www.ucolick.org/~sla/leapsecs/onlinebib.html : """ The standards committees decided that POSIX time should be UTC, but the early POSIX standards inexplicably incorporated a concept which never existed in UTC -- the ``double leap second''. This mistake reportedly existed in the POSIX standard from 1989, and it persisted in POSIX until at least 1997. """ Cheers, Chris -- http://blog.rebertia.com -- http://mail.python.org/mailman/listinfo/python-list
Re: convert time to UTC seconds since epoch
On 2010-07-21, Chris Rebert wrote: > On Tue, Jul 20, 2010 at 6:31 PM, Greg Hennessy wrote: >> Given the documentation talks about "double leap seconds" which don't >> exist, why should this code be trusted? > > Because they exist(ed) in POSIX. Why should POSIX time calculations involving leap seconds be trusted? This is a pet peeve of mine, when will someone actually implement leap seconds correctly? And as a professional astronomer myself, I'm well aware of Steve Allen's website. :) -- http://mail.python.org/mailman/listinfo/python-list
Re: convert time to UTC seconds since epoch
On Tue, Jul 20, 2010 at 6:48 PM, Greg Hennessy wrote: > On 2010-07-21, Chris Rebert wrote: >> On Tue, Jul 20, 2010 at 6:31 PM, Greg Hennessy wrote: >>> Given the documentation talks about "double leap seconds" which don't >>> exist, why should this code be trusted? >> >> Because they exist(ed) in POSIX. > > Why should POSIX time calculations involving leap seconds be trusted? I'm not saying they necessarily should, but they're standardized and the `time` module is based on POSIX/Unix-ish assumptions; not following POSIX would be inconsistent and problematic. Breaking standards is bad, M'Kay? > This is a pet peeve of mine, when will someone actually implement leap > seconds correctly? Well, at least there's the possibility they will be eliminated in the future anyway, which would make their implementation a non-issue. :-) Cheers, Chris -- http://blog.rebertia.com -- http://mail.python.org/mailman/listinfo/python-list
Re: Exposing buffer interface for non-extension types?
On 7/20/10 9:39 PM, Ken Watford wrote: On Tue, Jul 20, 2010 at 9:26 PM, Robert Kern wrote: On 7/20/10 8:38 PM, Ken Watford wrote: On Tue, Jul 20, 2010 at 6:58 PM, Stefan Behnel wrote: Ken Watford, 21.07.2010 00:09: Is there any way to expose the PEP 3118 buffer interface for objects that aren't extension types? Given that it's a pure C-level interface, I don't think there would be much use for that. Perhaps, but *why* is it only a pure C-level interface? It's based on/inspired by the array interface, which was not a pure C-level interface. Did they simply neglect to provide the functionality due to lack of obvious use cases, or did they consciously decide to drop thaThat functionality? Lack of obvious use cases. The primary use case is for C extensions to communicate with each other. SWIG is the odd man out in that it does not create true extension types. While the functionality of the PEP derives from numpy's interface, it's inclusion in Python was largely seen as the extension of the older buffer interface which is also a pure C interface. The Python-level __array_interface__ numpy API is not and will not be deprecated despite some outdated language in the documentation. Please continue to use it. Thanks, that's good to know. (Someone should probably do something about the big red box that says the opposite in the current docs). I just did. I'm sorry; we had a discussion about that some time ago, and I thought we had removed it then. I assume the same is true about __array_struct__? Since it *is* an extension despite the non-extension types, filling in a structure is a little more convenient than building the dictionary. Yes, __array_struct__ is also not deprecated. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco -- http://mail.python.org/mailman/listinfo/python-list
Re: Exposing buffer interface for non-extension types?
On 7/20/10 9:17 PM, sturlamolden wrote: On 21 Jul, 02:38, Ken Watford wrote: Perhaps, but *why* is it only a pure C-level interface? It is exposed to Python as memoryview. That's not really his question. His question is why there is no way for a pure Python class (like SWIG wrappers) have no way to expose the buffer interface. memoryview allows pure Python code to *consume* the buffer interface but not for pure Python classes to *provide* it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco -- http://mail.python.org/mailman/listinfo/python-list
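The asymmetry Robert points out can be demonstrated with nothing but the stdlib: memoryview will consume any buffer-providing object, but a plain class has no special method it can define to become one (NotABuffer below is just an illustrative stand-in):

```python
# Consuming the buffer interface works from pure Python:
m = memoryview(b"hello")
first_two = m[0:2].tobytes()  # slices without copying the underlying bytes

# But a pure-Python class has no hook for *providing* buffers:
class NotABuffer:
    pass

try:
    memoryview(NotABuffer())
    provided = True
except TypeError:
    provided = False  # no way for a plain class to supply the interface
```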
Multidimensional Fitting
I want to fit an n-dimensional distribution with an n-dimensional gaussian. So far I have managed to do this in 2d (see below). I am not sure how to convert this to work in n-dimensions. Using "ravel" on the arrays is not ideal, but optimize does not appear to work on multidimensional arrays. It seems "meshgrid" also does not translate to nd.

import numpy as np
from scipy import optimize

#2D
#random points
N = 1000
nd = 2
a = np.random.normal(loc=[1., 2.], scale=[1., 2.], size=[N, nd])

#histogram points
bins = [10, 20]
H, e = np.histogramdd(a, bins=bins)

#find center rather than left edge
edges = []
for i in range(len(e)):
    edges.append(e[i][:-1] + np.diff(e[i])/2.)

#not n-dimensional
x, y = np.meshgrid(edges[0], edges[1])
x, y = np.ravel(x), np.ravel(y)
H = np.ravel(H)

#2D gaussian
gauss2d = lambda p, x, y: p[0] * np.exp((-0.5*(x-p[1])**2/p[2]**2) - (0.5*(y-p[3])**2/p[4]**2)) + p[5]
residuals = lambda p, x, y, H: H - gauss2d(p, x, y)

#Fitting
p0 = [1., 0., 1., 0., 1., 0]
plsq, cov_x, infodict, mesg, ier = optimize.leastsq(residuals, p0, args=(x, y, H), full_output=True)

#Reading off
_x, _y = 0.091, 1.293
print '%.3f %.3f %.4f' % (_x, _y, gauss2d(plsq, _x, _y) / plsq[0])

-- View this message in context: http://old.nabble.com/Multidimensional-Fitting-tp29221343p29221343.html Sent from the Python - python-list mailing list archive at Nabble.com. -- http://mail.python.org/mailman/listinfo/python-list
Re: Exposing buffer interface for non-extension types?
On Jul 20, 6:04 pm, Ken Watford wrote: > On Tue, Jul 20, 2010 at 8:28 PM, Carl Banks wrote: > > On Jul 20, 3:09 pm, Ken Watford wrote: > >> Is there any way to expose the PEP 3118 buffer interface for objects > >> that aren't extension types? > > >> Currently, I can expose the NumPy array interface (using either > >> __array_interface__ or __array_struct__) for any class, extension or > >> otherwise. But I can't find any reference to python-side interfacing > >> for PEP 3118. SWIG makes an extension module for your wrapped code, > >> but not extension *types*, so the classes it produces are pure-python > >> with methods added in from the extension module. > > >> The NumPy array interface works fine for now (especially since NumPy > >> is the only thing I need to consume these objects), but the > >> documentation claims that it's being deprecated in favor of PEP 3118, > >> so I thought it might be relevant to bring this up. > > > Can you let us know what you want to use it for? We could offer > > better help. > > > Numpy is generally how I get at buffers in Python 2.x. For instance > > if I have an object m that supports buffer protocol (m could be a string, > > mmap object, Python array, etc.), then the following will create an > > array view of the same buffer: > > > numpy.ndarray((10,10),type=numpy.float32,buffer=m) > > > As far as I know this procedure won't be too different under PEP 3118; > > if anything it's simplified in Python 3 since it can discover type and > > shape information itself. (You'll have to check with the numpy people > > on that.) > > I'm not having trouble using buffers, I'm having trouble providing them. I see.
In the case of NumPy, I can easily arrange for > the necessary interface on the proxy object to be fulfilled, because > NumPy doesn't care what kind of object it's attached to. But the PEP > 3118 interface can only be provided by C extension types. Well, let's get the facts straight before proceeding. The buffer protocol has always been for C only. PEP 3118 is an extension to this buffer protocol, so it's also C only. Numpy's array interface is not the buffer protocol, and is specific to numpy (though I guess other types are free to use it among themselves, if they wish). So what you've been doing was *not* to provide buffers; it was to provide some numpy-specific methods that allowed numpy to discover your buffers. That said, your complaint is reasonable. And since the numpy docs say "use PEP 3118 instead of array interface" your misunderstanding is understandable. I assume it's not practical to address your real problem (i.e., SWIG), so we'll instead take it for granted that you have a block of memory in some custom C code that you can't expose as a real buffer. In the past, you've used numpy's special Python methods to let numpy know where your memory block was, but what do you do now? Your suggestion to write a small base class is probably the best you can do as things stand now. Another thing you can do is write a small extension to create a surrogate buffer object (that implements buffer protocol), so if you want to create an ndarray from a buffer you can do it like this: b = create_surrogate_buffer(pointer=_proxy.get_buffer_pointer()) numpy.ndarray(shape=_proxy.get_shape(),type=_proxy.get_type(),buffer=b) Not as convenient to use, but might be simpler to code up than the base-class solution. The numpy developers might be sympathetic to your concerns since you have a reasonable use case (they might also tell you you need to get a real wrapper generator). 
Just be sure to keep the distinction clear between "buffer protocol" (which is what PEP 3118 is, is C only, and the buffer must be owned by a Python object) and "array interface" (which is what you were using for 2.x, is C or Python, and the buffer can be anywhere). HTH Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
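Carl's surrogate-buffer suggestion can be sketched with ctypes alone, since ctypes array types already implement the buffer protocol; the pointer and length below are stand-ins for whatever the SWIG proxy would report (create_surrogate_buffer itself remains hypothetical):

```python
import ctypes
import numpy as np

# stand-in for a block of memory owned by wrapped C++ code
backing = (ctypes.c_double * 4)(1.0, 2.0, 3.0, 4.0)
ptr = ctypes.addressof(backing)  # what a proxy's get_buffer_pointer() might return
n = 4

# ctypes array objects expose the buffer protocol, so this view acts
# as the surrogate buffer over the raw pointer
buf = (ctypes.c_double * n).from_address(ptr)
arr = np.frombuffer(buf, dtype=np.float64)
```

Note that `backing` must stay alive for as long as `arr` is in use, exactly the ownership caveat Carl raises.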
Re: Multidimensional Fitting
On 7/20/10 10:13 PM, D2Hitman wrote: I want to fit an n-dimensional distribution with an n-dimensional gaussian. So far i have managed to do this in 2d (see below). I am not sure how to convert this to work in n-dimensions. Using "ravel" on the arrays is not ideal, but optimize does not appear to work on multidimensional arrays. It seems "meshgrid" also does not translate to nd. You will want to ask scipy questions on the scipy mailing list: http://www.scipy.org/Mailing_Lists Don't try to fit a Gaussian to a histogram using least-squares. It's an awful way to estimate the parameters. Just use np.mean() and np.cov() to estimate the mean and covariance matrix directly. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco -- http://mail.python.org/mailman/listinfo/python-list
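Robert's suggestion amounts to estimating the parameters directly from the samples instead of from a histogram; a sketch of that approach, which works unchanged in any number of dimensions:

```python
import numpy as np

np.random.seed(0)  # fixed seed so the estimates are reproducible
a = np.random.normal(loc=[1., 2.], scale=[1., 2.], size=(100000, 2))

mu = a.mean(axis=0)            # estimated mean, one entry per dimension
cov = np.cov(a, rowvar=False)  # estimated covariance matrix, nd x nd
```

For independent axes the true covariance here is diag(1.0, 4.0), and mu and cov recover it directly, with no binning error.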
Re: Set Python Path for Idle Mac 10.5
In article
,
neoethical wrote:
> New to programming and after doing some research I've chosen to work with
> Python. One thing that's bothering me is that I would like to set up a
> specific folder in my Documents folder to hold my modules. How do I go about
> doing this? I've found the way to change it for each IDLE session but I'd
> like to make it check the folder automatically. I've done a bunch of
> searches and have come up with nothing helpful.
On OS X, depending on which version of Python you are using, there are
usually two ways to start IDLE: either by launching IDLE.app (often
found in /Applications/Python x.y or /Applications/MacPython x.y) or by
invoking IDLE from the command line (generally something like
/usr/local/bin/idlex.y). When started from a terminal command line,
IDLE uses the current working directory ("folder") as the default for
opening and saving files from the shell window. When you "launch"
IDLE.app (by double-clicking on its icon, for example), it always uses
Documents as the default working directory.
Unfortunately, AFAIK, there is no built-in way to specify a different
default for the shell window, which is a bit annoying. The simplest way
to work around it, I think, would be to always start IDLE from the
command line after changing to the desired default directory. So, from
a terminal session (in Terminal.app or equivalent), something like this
for, say, python3.1 installed from python.org:
cd /path/to/default/directory
/usr/local/bin/idle3.1
The equivalent could be turned into a shell function or alias or an
AppleScript app or Automator action.
From the command line, you can also give IDLE a list of one or more
files to open, each in its own file window. When the focus is on a file
window, file commands such as open and save default to the directory of
the opened file (this is also true in IDLE.app). So you could have
something like this:
cd /path/to/project/directory
/usr/local/bin/idle3.1 my_mod_1.py my_mod_2.py ...
or, if you want to edit all of the python modules in a directory:
cd /path/to/project/directory
/usr/local/bin/idle3.1 *.py
You can achieve a similar effect (except for the shell window) in the
Finder by dragging the files to the IDLE.app icon (in a Finder window or
on the dock). Double-clicking on the .py files themselves can be made
to work but it's a bit of a crap shoot which version of IDLE or other
app might actually be launched; it's best to avoid depending on that
mechanism.
--
Ned Deily,
[email protected]
--
http://mail.python.org/mailman/listinfo/python-list
Re: Trying to redirect every urel request to test.py script with the visitors page request as url parameter.
2010/7/20 Νίκος : > Hello guys! This is my first post in this group! I do not have an answer to your question, other than to suggest you look at (and/or post) relevant lines from Apache's access.log and error.log. I write mostly to say that, in my experience, folks on this list are very helpful, but your post has many, many spelling errors, making it difficult to read. It looks like you are using Gmail, which has a "Check Spelling" drop-down on the compose window. If you re-post with better spelling you might get more responses. -- http://mail.python.org/mailman/listinfo/python-list
Re: Multidimensional Fitting
Robert Kern-2 wrote: > > Don't try to fit a Gaussian to a histogram using least-squares. It's an > awful > way to estimate the parameters. Just use np.mean() and np.cov() to > estimate the > mean and covariance matrix directly. > Ok, what about distributions other than gaussian? Would you use leastsq in that case? If yes, I will post that to the scipy mailing list. -- View this message in context: http://old.nabble.com/Multidimensional-Fitting-tp29221776p29221776.html Sent from the Python - python-list mailing list archive at Nabble.com. -- http://mail.python.org/mailman/listinfo/python-list
Re: Multidimensional Fitting
On 7/20/10 11:56 PM, D2Hitman wrote: Robert Kern-2 wrote: Don't try to fit a Gaussian to a histogram using least-squares. It's an awful way to estimate the parameters. Just use np.mean() and np.cov() to estimate the mean and covariance matrix directly. Ok, what about distributions other than gaussian? Would you use leastsq in that case? If yes, i will post that to the scipy mailing list. No, you would use a maximum likelihood method. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco -- http://mail.python.org/mailman/listinfo/python-list
Re: linux console command line history
On Jul 20, 11:38 pm, "[email protected]" wrote: > Hi to all, > I'm writing a linux console app with sockets. It's basically a client > app that fires commands in a server. > For example: > $log user 55 > $sessions list > $server list etc. > What I want is, after entering some commands, to press the up arrow > key and see the previous commands that I have executed. > Any hints? Any examples? > > Antonis You may find it interesting to look at the source code for plac (http://micheles.googlecode.com/hg/plac/doc/plac_adv.html). The readline support (including command history and autocompletion) is implemented in the ReadlineInput class (see http://code.google.com/p/micheles/source/browse/plac/plac_ext.py). If you just want command history you can use rlwrap (http://freshmeat.net/projects/rlwrap). -- http://mail.python.org/mailman/listinfo/python-list
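If you want the history in-process rather than via rlwrap, the stdlib readline module is enough; a minimal sketch seeding the history with the commands from the post (use raw_input() on Python 2, input() on Python 3):

```python
import readline  # on Linux, importing this hooks input() up to GNU readline

# commands entered at the prompt accumulate in the history automatically;
# pre-seeding it here just makes the examples reachable with the up arrow
for cmd in ("log user 55", "sessions list", "server list"):
    readline.add_history(cmd)

# the real client loop would then be just:
# while True:
#     line = input("$ ")   # up-arrow/editing now work
#     ...dispatch the command to the server...
```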
