Re: wing ide vs. komodo?
John Salerno wrote:
> Just curious what users of the two big commercial IDEs think of them
> compared to one another (if you've used both).
>
> Wing IDE looks a lot nicer and fuller featured in the screenshots, but a
> glance at the feature list shows that the "personal" version doesn't
> even support code folding! That's a little ridiculous and makes me have
> doubts about it.

Well, I don't know about the personal edition, but I've used Komodo and Wing, and I must say that I chose Wing in the end because its debugger is so much more robust than Komodo's. I tried remote debugging mod_python using Komodo, and it just choked. I spent a week trying to get it to work. Wing, on the other hand, just worked. I have only the highest praise for the Wing IDE debugger; once you get to know it, it's so much more powerful than Komodo's. The time saved over Komodo was well worth the money for the professional edition.
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: IDE
giuseppe wrote:
> What is the better IDE software for python programming?

One word. Wing. The debugger will pay for itself within weeks. There is no better Python debugger for most situations.
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: Exploiting Dual Core's with Py_NewInterpreter's separated GIL ?
On Nov 2, 1:32 pm, robert <[EMAIL PROTECTED]> wrote:
> I'd like to use multiple CPU cores for selected time consuming Python
> computations (incl. numpy/scipy) in a frictionless manner.
>
> Interprocess communication is tedious and out of question, so I thought about
> simply using a more Python interpreter instances (Py_NewInterpreter) with
> extra GIL in the same process.

Why not use IronPython? It's up to date (unlike Jython), has no GIL, and is cross-platform wherever you can get .NET or Mono (Unix, Macs, Windows), and you can use most any scientific libraries written for the .NET/Mono platform (there are a lot). Take a look anyway.
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: all ip addresses of machines in the local network
damacy wrote:
> hi, there. i have a problem writing a program which can obtain ip
> addresses of machines running in the same local network.
>
> say, there are 4 machines present in the network; [a], [b], [c] and [d]
> and if i run my program on [a], it should be able to find "host names"
> and "ip addresses" of the other machines; [b], [c] and [d]?
>
> i have read some threads posted on this group, however, they only work
> for localhost, not the entire network.
>
> any hints if possible?
>
> thanks for your time.
>
> regards, damacy

What is this for? Some kind of high availability server setup? I don't know anything that would be useful to you, but I am curious, and maybe it will clarify your intentions for others.
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: threading support in python
The trouble is there are some environments where you are forced to use threads. Apache and mod_python are an example. You can't make use of multiple CPUs unless you're on *nix and run with multiple processes AND your application doesn't store large amounts of data in memory (which mine does), so you'd have to physically double the computer's memory for a dual-core, or quadruple it for a quad-core. And forget about running a Windows server; Apache will not even run with multiple processes. In years to come this will be more of an issue, because single-core CPUs will be harder to come by; you'll be throwing away half of every CPU you buy.
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Prevent self being passed to a function stored as a member variable?
How can you prevent self from being passed to a function stored as a member variable?

class Foo(object):
    def __init__(self, callback):
        self.func = callback

f = Foo(lambda x: x)
f.func(1) # TypeError, func expects 1 argument, received 2

I thought maybe you could do this:

class Foo(object):
    def __init__(self, callback):
        self.func = staticmethod(callback) # Error, staticmethod not callable

Somebody (maybe everybody other than myself) here must know?
Thanks,
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: Prevent self being passed to a function stored as a member variable?
Qiangning Hong wrote:
> Do you really get that error?

Sorry, my bad. You're correct of course. I had accidentally passed an object, by naming it the same as the function, instead of my function, and the object had __call__ defined and took exactly two parameters, just like my function, but one of them being self. So when I passed two arguments, it got three, and that's how I was confused into thinking self was being passed to my function (because it was, but not to my function). All a big foul-up on my part. I might have worked on that problem for a while before figuring it out, thanks!
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
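The underlying rule here is worth stating: functions become bound methods only through the descriptor protocol, which runs when a function is found in the *class* dict. A plain function stored as an *instance* attribute is returned untouched, so no self is ever prepended. A minimal sketch:

```python
class Foo(object):
    def __init__(self, callback):
        # Instance attribute: lookup finds it in the instance dict,
        # so the descriptor protocol never runs and no 'self' is added.
        self.func = callback

f = Foo(lambda x: x * 2)
result = f.func(21)  # called with exactly one argument, as expected
```

So the original question needed no staticmethod trickery at all; instance-attribute functions are already unbound.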
Re: threading support in python
> You seem to be confused about the nature of multiple-process
> programming.
>
> If you're on a modern Unix/Linux platform and you have static read-only
> data, you can just read it in before forking and it'll be shared
> between the processes.

Not familiar with *nix programming, but I'll take your word on it.

> If it's read/write data or you're not on a Unix platform, you can use
> shared memory to shared it between many processes.

I know how shared memory works; it's the last resort in my opinion.

> Threads are way overused in modern multiexecution programming. The
> It used to run on windows with multiple processes. If it really won't
> now, use an older version or contribute a fix.

First of all, I'm not in control of spawning processes or threads. Apache does that, and Apache has no MPM for Windows that uses more than one process. Secondly, "superior" is definitely a matter of opinion. Let's see how you would define superior.

1) Port (a nicer word for rewrite) the worker MPM from *nix to Windows.

2) Alternately, switch to running Linux servers (which have their pluses) but about which I know nothing. I've been using Windows since I was 10 years old; I'm confident in my ability to build, secure, and maintain a Windows server. I don't think anyone would recommend me to run Linux servers with very little in the way of Linux experience.

3) Rewrite my codebase to use some form of shared memory. This would be a terrible nightmare that would take at least a month of development time and a lot of heavy rewriting. It would be very difficult, but I'll grant that it may work if done properly, with only small performance losses.

Sounds like a deal. I would find an easier time, I think, porting mod_python to .NET and leaving that GIL behind forever. Thankfully, I'm not considering such drastic measures - yet. Why on earth would I want to do all of that work? Just because you want to keep this evil thing called a GIL?
My suggestion is: in Python 3, ditch the ref counting, use a real garbage collector, and make that GIL walk the plank. I have my doubts that it would happen, but that's fine; the future of Python is in things like IronPython and PyPy. CPython's days are numbered. If there were a mod_dotnet I wouldn't be using CPython anymore.

> Now, the GIL is independent of this; if you really need threading in
> your situation (you share almost everything and have hugely complex
> data structures that are difficult to maintain in shm) then you're
> still going to run into GIL serialization. If you're doing a lot of
> work in native code extensions this may not actually be a big
> performance hit, if not it can be pretty bad.

Actually, I'm not sure I understand you correctly. You're saying that in an environment like Apache (with 250 threads or so) and my hugely complex shared data structures, the GIL is going to cause a huge performance hit? So even if I do manage to find my way around in the Linux world, and I upgrade my memory, I'm still going to be paying for that darned GIL? Will the madness never end?
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: testing for valid reference: obj vs. None!=obs vs. obj is not None
alf wrote:
> Hi,
>
> I have a reference to certain objects. What is the most pythonic way to
> test for valid reference:
>
> if obj:
>
> if None!=obs:
>
> if obj is not None:

I like the last way the most. I used timeit to benchmark it against the first one, expecting it to be faster (the first is a general false test; the last should just be comparing pointers), but it's slower. Still, I don't expect that to ever matter, so I use it wherever I wish to test for None; it reads the best of all of them.
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
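One caveat worth adding: the three tests are not interchangeable, which matters more than the speed difference. `if obj:` invokes the object's truth test, so it also rejects 0, empty strings, and empty containers, while `is not None` rejects only None itself. A sketch of the difference:

```python
def has_value_by_truth(obj):
    return bool(obj)        # false for None, but also for 0, "", []

def has_value_by_identity(obj):
    return obj is not None  # false only for None itself

# 0 is a perfectly good value, but the truth test discards it
zero_by_truth = has_value_by_truth(0)        # False
zero_by_identity = has_value_by_identity(0)  # True
```

That is the real reason `is not None` is preferred when "no object" specifically means None.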
Re: Testing a website with HTTPS login and cookies
Hari Sekhon wrote: > If anybody knows how to do this could they please give me a quick > pointer and tell me what libraries I need to go read up on? > One word. Selenium. -Sandra -- http://mail.python.org/mailman/listinfo/python-list
Re: threading support in python
Steve Holden wrote:
> Quite right too. You haven't even sacrificed a chicken yet ...

Hopefully we don't get to that point.

> You write as though the GIL was invented to get in the programmer's way,
> which is quite wrong. It's there to avoid deep problems with thread
> interaction. Languages that haven't bitten that bullet can bite you in
> quite nasty ways when you write threaded applications.

I know it was put there because it is meant to be a good thing. However, it gets in my way. I would be perfectly happy if it were gone. I've never written code that assumes there's a GIL. I always write my code with all shared writable objects protected by locks. It's far more portable, and a good habit to get into. You realize that because of the GIL, they were discussing (and may have already implemented) Java-style synchronized dictionaries and lists for IronPython, simply because Python programmers just assume they are thread safe thanks to the GIL. I always hated that about Java. If you want to give me thread-safe collections, fine, they'll be nice for sharing between threads, but don't make me use synchronized collections for single-threaded code. You'll notice the newer Java collections are not synchronized; it would seem I'm not alone in that opinion.

> Contrary to your apparent opinion, the GIL has nothing to do with
> reference-counting.

Actually it does. Without the GIL, reference counting is not thread safe. You would have to synchronize all reference count accesses, increments, and decrements, because you have no way of knowing which objects get shared across threads. I think with Python's current memory management, the GIL is the lesser evil. I'm mostly writing this to provide a different point of view; many people seem to think (previously linked blog) that there is no downside to the GIL, and that's just not true. However, I don't expect that the GIL can be safely removed from CPython.
I also think that it doesn't matter, because projects like IronPython and PyPy are very likely the way of the future for Python anyway. Once you move away from C there are so many more things you can do.

> I think the suggestion was rather that abandoning Python because of the
> GIL might be premature optimisation. But since you appear to be sticking
> with it, that might have been unnecessary advice.

I would never abandon Python, and I hold the development team in very high esteem. That doesn't mean there aren't a few things (like the GIL, or super) that I don't like. But overall they've done an excellent job on the 99% of things they've got right. I guess we don't say that enough. I might switch from CPython sometime to another implementation, but it won't be because of the GIL. I'm very fond of the .NET framework as a library, and I'd also rather write performance-critical code in C# than C (who wouldn't?). I'm also watching PyPy with interest.
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: threading support in python
Felipe Almeida Lessa wrote:
> 4 Sep 2006 19:19:24 -0700, Sandra-24 <[EMAIL PROTECTED]>:
> > If there was a mod_dotnet I wouldn't be using
> > CPython anymore.
>
> I guess you won't be using then: http://www.mono-project.com/Mod_mono

Oh, I'm aware of that, but it's not what I'm looking for. Mod_mono just lets you run ASP.NET on Apache. I'd much rather use Python :) Now if there were a way to run IronPython on Apache I'd be interested.
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: threading support in python
[EMAIL PROTECTED] wrote:
> You can do the same on Windows if you use CreateProcessEx to create the
> new processes and pass a NULL SectionHandle. I don't think this helps
> in your case, but I was correcting your impression that "you'd have to
> physically double the computer's memory for a dual core, or quadruple
> it for a quadcore". That's just not even near true.

Sorry, my bad. What I meant to say is that for my application I would have to increase the memory linearly with the number of cores. I have about 100 MB of memory that could be shared between processes, but everything else would really need to be duplicated.

> As I said, Apache used to run on Windows with multiple processes; using
> a version that supports that is one option. There are good reasons not
> to do that, though, so you could be stuck with threads.

I'm not sure it has done that since the 1.3 releases. mod_python will work for that, but it involves going way back in its release history as well. I really don't feel comfortable with that, and I don't doubt I'd give up a lot of things I'd miss.

> Having memory protection is superior to not having it--OS designers
> spent years implementing it, why would you toss out a fair chunk of it?
> Being explicit about what you're sharing is generally better than not.

Actually, I agree. If shared memory will prove easier, then why not use it, if the application lends itself to that.

> But as I said, threads are a better solution if you're sharing the vast
> majority of your memory and have complex data structures to share.
> When you're starting a new project, really think about whether they're
> worth the considerable tradeoffs, though, and consider the merits of a
> multiprocess solution.

There are merits, the GIL being one of those. I believe I can fairly easily rework things into a multi-process environment by duplicating memory. Over time I can make the memory usage more efficient by sharing some data structures out, but that may not even be necessary.
The biggest problem is learning my way around Linux servers. I don't think I'll choose that option initially, but I may work on it as a project in the future. It's about time I got more familiar with Linux anyway. > It's almost certainly not worth rewriting a large established > codebase. Lazy me is in perfect agreement. > I disagree with this, though. The benefits of deterministic GC are > huge and I'd like to see ref-counting semantics as part of the language > definition. That's a debate I just had in another thread, though, and > don't want to repeat. I just took it for granted that a GC like Java and .NET use is better. I'll dig up that thread and have a look at it. > I didn't say that. It can be a big hit or it can be unnoticeable. It > depends on your application. You have to benchmark to know for sure. > > But if you're trying to make a guess: if you're doing a lot of heavy > lifting in native modules then the GIL may be released during those > calls, and you might get good multithreading performance. If you're > doing lots of I/O requests the GIL is generally released during those > and things will be fine. If you're doing lots of heavy crunching in > Python, the GIL is probably held and can be a big performance issue. I don't do a lot of work in native modules, other than the standard library things I use, which doesn't count as heavy lifting. However I do a fair amount of database calls, and either the GIL is released by MySQLdb, or I'll contribute a patch so that it is. At any rate, I will measure, and I suspect the GIL will not be an issue. -Sandra -- http://mail.python.org/mailman/listinfo/python-list
Cross-process dictionary/hashtable
A dictionary that can be shared across processes without being marshaled? Is there such a thing already for python? If not is there one for C maybe? I was just thinking how useful such a thing could be. It's a great way to share things between processes. For example I use a cache that subclasses a dictionary. It would be trivial to modify it to work across processes by changing the base class and the locking mechanism. Thanks, -Sandra -- http://mail.python.org/mailman/listinfo/python-list
Re: Cross-process dictionary/hashtable
I looked at posh, and read the report on it, it's very interesting, but it will not work for me. Posh requires that it forks the processes, but in mod_python the processes were forked by apache and use different interpreters. Calvin Spealman wrote: > Maybe what you want is something like memcache > (http://cheeseshop.python.org/pypi/memcached), which offers a basic > in-memory, key-value share that processes (even on different boxes) > can connect to. Of course, as with any kind of concurrent work, its > going to be far easier to have some restrictions, which memcache has. > For example, full python objects being shared isn't a great idea, or > even possible in many situations. The closest you could get with > something like memcache is to wrap it up in a dictionary-like object > and have it pickle things coming in and out, but that won't work for > everying and has security concerns. Memcached looks like it will do the job. Thanks! -Sandra -- http://mail.python.org/mailman/listinfo/python-list
Re: wxPython help please
Try the wxPython mailing list, which you can find on their site. And the best wxPython reference is the book (also available as an e-book) by Robin Dunn, who created wxPython. Seeing wxPython from his perspective is well worth the money. If I recall correctly he devoted an entire chapter to drawing with a canvas widget. -Sandra -- http://mail.python.org/mailman/listinfo/python-list
How to test if two strings point to the same file or directory?
Comparing file system paths as strings is very brittle. Is there a better way to test if two paths point to the same file or directory (and that will work across platforms?) Thanks, -Sandra -- http://mail.python.org/mailman/listinfo/python-list
Re: How to test if two strings point to the same file or directory?
On Dec 16, 8:30 pm, Steven D'Aprano <[EMAIL PROTECTED]> wrote:
> On Sat, 16 Dec 2006 17:02:04 -0800, Sandra-24 wrote:
> > Comparing file system paths as strings is very brittle.
>
> Why do you say that?
> Are you thinking of something like this?
>
> /home//user/somedirectory/../file
> /home/user/file
>
> Both point to the same file.
>
> > Is there a
> > better way to test if two paths point to the same file or directory
> > (and that will work across platforms?)
>
> How complicated do you want to get?
> If you are thinking about aliases,
> hard links, shortcuts, SMB shares and other complications, I'd be
> surprised if there is a simple way.

So would I. Maybe it would make a good addition to the os.path library? os.path.isalias(path1, path2)

> But for the simple case above:
>
> >>> import os
> >>> path = '/home//user/somedirectory/../file'
> >>> os.path.normpath(path)
> '/home/user/file'

The simplest I can think of that works for me is:

def isalias(path1, path2):
    return os.path.normcase(os.path.normpath(path1)) == os.path.normcase(os.path.normpath(path2))

But that won't work with more complicated examples. A common one that bites me on Windows is shortening of path segments to six characters and a ~1.
-Dan
--
http://mail.python.org/mailman/listinfo/python-list
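For the alias cases Steven alludes to, os.path.realpath goes one step further than normpath: it resolves symbolic links as well as '..' segments and doubled slashes (though it still cannot see hard links, shortcuts, or SMB shares). A hedged sketch along the lines of the isalias function above:

```python
import os.path

def same_path(p1, p2):
    # realpath resolves symlinks and '..' segments; normcase folds
    # case and slash direction on case-insensitive filesystems.
    def canon(p):
        return os.path.normcase(os.path.realpath(p))
    return canon(p1) == canon(p2)
```

This is still a lexical comparison at heart; on platforms with os.path.samefile, comparing os.stat device and inode numbers is the authoritative test for existing files.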
Re: wxPython help please
On Dec 16, 8:43 pm, Jive Dadson <[EMAIL PROTECTED]> wrote:
> I bought the ebook. Searching for "pixel", all I came up with was a
> method called GetPixel in a "device context." I know there must be a
> device context buried in there somewhere, so now I need to winkle it out.

You are right that you need to capture the mouse-down event for the canvas and get the cursor position from there. To get the RGB value for that pixel may require the DC; I don't remember enough about them to say for sure. You might have better luck on the mailing list; it's a high-volume list and you'll likely get a good response there.
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: Good Looking UI for a stand alone application
On 12/16/06, The Night Blogger <[EMAIL PROTECTED]> wrote: > Can someone recommend me a good API for writing a sexy looking (Rich UI like > WinForms) shrink wrap application > My requirement is that the application needs to look as good on Windows as > on the Apple Mac wxPython or something layered on it would be the way to go. I tried all the popular toolkits (except qt) and nothing else comes close for cross platform gui work. Don't let people persuade you otherwise, that caused me a lot of trouble. -Sandra -- http://mail.python.org/mailman/listinfo/python-list
Re: How to test if two strings point to the same file or directory?
It looks like you can get a fairly good approximation of samefile on win32. Currently I'm using the algorithm suggested by Tim Chase, as it is "good enough" for my needs. But if one wanted to add samefile to the ntpath module, here's the algorithm I would suggest:

If the passed files do not exist, apply abspath and normcase to both and return the result of comparing them for equality as strings.

If both paths pass isfile(), try the mechanism linked to in this thread which opens both files at the same time and compares volume and index information. Return this result. If that raises an error (maybe they will not allow us to open them), try comparing them using the approach suggested by Tim Chase, but if that works there should be some indication given that the comparison may not be accurate (raise a warning?). If that also fails, raise an error.

This should allow samefile to be used on win32 in well over 99.9% of cases with no surprises. For the rest it will either return a result that is likely correct, with a warning of some kind, or it will fail entirely. It's not perfect, but you can't achieve perfection here. It would, however, have far fewer surprises than newbies using == to compare paths for equality. And it would also make os.path.samefile available on another platform.

os.path.sameopenfile could be implemented perfectly using the comparison of volume and index information alone (assuming you can get a win32 handle out of the open file object, which I think you can).

If someone would be willing to write a patch for the ntpath tests, I would be willing to implement samefile as described above or as agreed upon in further discussion. Then we can submit it for inclusion in the stdlib.
--
http://mail.python.org/mailman/listinfo/python-list
Why can't you use varargs and keyword arguments together?
I've always wondered why I can't do:
def foo(a,b,c):
return a,b,c
args = range(2)
foo(*args, c = 2)
When you can do:
foo(*args, **{'c':2})
Whenever I stub my toe on this one, I always just use the second
approach, which seems less readable. As with most things in Python,
I've suspected there's a good reason for it. Having just bumped into
this one again, I thought I'd ask if anyone knows why the first syntax
should not be allowed.
This comes up anyplace you need variable arguments and keyword
arguments together but don't have the keyword arguments in a dict. In
this case you are forced to put them in a dict. I don't think anyone
would find that to be more readable.
Thanks and Merry Christmas,
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: Building python C++ extension modules using MS VC++ 2005?
You can use 2005 to build extensions for Python 2.5. I've done this with several extensions, both my own and others. I do not know if you can use it for Python 2.4, so I won't advise you on that. I thought Microsoft made its C/C++ compiler, version 7.1 (2003) freely available as a command line tool. If you can't find it on their site, ask around, I'm sure a lot of people have it. Probably you'll also find someone has put it up somewhere if you search on google. Try 2005 first and see what happens though. -Sandra -- http://mail.python.org/mailman/listinfo/python-list
Re: Why can't you use varargs and keyword arguments together?
On Dec 21, 5:59 pm, Jean-Paul Calderone <[EMAIL PROTECTED]> wrote:
> You just need to turn things around:
>
> >>> def foo(a, b, c):
> ...     return a, b, c
> ...
> >>> args = range(2)
> >>> foo(c=2, *args)
> (0, 1, 2)
> >>>

You know, I feel like a real schmuck for not trying that... Thanks!
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
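To make the accepted fix concrete: the restriction was purely one of call-site grammar in older Pythons, and placing the keyword argument before the *args satisfies it (modern Pythons accept `foo(*args, c=2)` as well). A sketch showing both spellings from the thread:

```python
def foo(a, b, c):
    return a, b, c

args = list(range(2))               # [0, 1]
by_keyword_first = foo(c=2, *args)  # the ordering Jean-Paul suggests
by_dict = foo(*args, **{"c": 2})    # the workaround from the original post
```

Both calls produce (0, 1, 2); only the first avoids building a throwaway dict.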
Modify the local scope inside a function
Is there a way in Python to add the items of a dictionary to the local function scope? i.e. var_foo = dict['var_foo']. I don't know how many items are in this dictionary, or what they are, until runtime.

exec statements are difficult for debuggers to deal with, so as a workaround I built my code into a function and saved it in a .py file. Then I load the .py file as a module and call the function instead. This works great, and it has the added advantage of precompiled versions of the code being saved as .pyc and .pyo files (faster repeated execution). The only trouble was I execed inside a specially created scope dictionary containing various variables and functions that the code requires. I can't seem to figure out how to get this same effect inside the function. Right now I'm passing the dict as an argument to the function, but I can't modify locals(), so it doesn't help me.
Thanks,
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: Modify the local scope inside a function
Hey Crutcher, thanks for the code, that would work. I'm now debating using that, or using function arguments to get the variables into the namespace. This would require knowing the variables in the dict ahead of time, but I suppose I can do that, because it's part of the same system that creates the dict. I'm just not very fond of having code relating to one thing in more than one place, because it puts the onus on the programmer to remember to change it in both places. Here I might forgive it, because it would make the generated code more readable.

It seems I created a fair amount of confusion over what I'm trying to do. I use special psp-like templates in my website. The template engine was previously execing the generated template code. It uses special environment variables that give it access to the functionality of the web engine. These are what are in that scope dictionary of mine, and why I exec the code in that scope. However, I want to integrate a debugger with the web engine now, and debugging execed generated code is a nightmare. So I save the generated code as a function in a module that is generated by the template engine.

Unless I'm missing something about what you're saying, this should now be faster as well, because afaik execed code has to be compiled on the spot, whereas a module, when you load it, is compiled (or loaded from a .pyc file) at import time. So one import and repeated function calls would be cheaper than repeated exec.
Thanks,
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
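The save-as-module approach described above can be sketched without touching the filesystem by exec'ing the generated source into a fresh module object: the compile cost is paid once, and every subsequent render is an ordinary function call. All names here are illustrative, not the actual template engine's:

```python
import types

# Hypothetical output of a template compiler: the environment the
# engine used to inject via an exec scope is now plain arguments.
TEMPLATE_SOURCE = """
def render(title, body):
    return "<h1>%s</h1><p>%s</p>" % (title, body)
"""

def load_template_module(name, source):
    mod = types.ModuleType(name)
    code = compile(source, "<template %s>" % name, "exec")
    exec(code, mod.__dict__)  # one-time cost; render() calls are cheap
    return mod

page = load_template_module("page", TEMPLATE_SOURCE)
html = page.render("Hi", "hello world")
```

Writing the source to a real .py file, as the post describes, adds the .pyc caching and gives the debugger a file to step through; the namespace mechanics are the same.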
''.join() with encoded strings
I'd love to know why calling ''.join() on a list of encoded strings automatically results in converting to the default encoding. First of all, it's undocumented, so if I didn't have non-ASCII characters in my utf-8 data I'd never have known, until one day I did, and then the code would break. Secondly, you can't override (for valid reasons) the default encoding, so that's not a way around it. So ''.join becomes pretty useless when dealing with the real (non-ASCII) world. I won't miss the str class when it finally goes (in v3?). How can I join my encoded strings efficiently?
Thanks,
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: ''.join() with encoded strings
Sorry, this was my mistake, I had some unicode strings in the list without realizing it. I deleted the topic within 10 minutes, but apparently I wasn't fast enough. You're right join works the way it should, I just wasn't aware I had the unicode strings in there. -Sandra -- http://mail.python.org/mailman/listinfo/python-list
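For the record, the behavior that bit me is easy to pin down: in Python 2, str.join promotes the whole result to unicode as soon as any element is unicode, decoding the byte strings with the default (ASCII) codec, which is what blows up on non-ASCII utf-8 data. Python 3 removed the trap by refusing to mix the two types at all. A sketch of the modern behavior:

```python
parts = ["caf\xe9", "bar"]       # all-text lists join without any decoding
joined = "".join(parts)

try:
    b"".join(["text", b"bytes"])  # mixing text and bytes is now an error
    mixed_ok = True
except TypeError:
    mixed_ok = False
```

So the silent implicit decode became a loud TypeError, which would have surfaced my stray unicode strings immediately.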
Opening files without closing them
I was reading over some python code recently, and I saw something like this:

contents = open(file).read()

And of course you can also do:

open(file, "w").write(obj)

Why do they not close the files? Is this sloppy programming, or is the file automatically closed when the reference is destroyed (after this line)? I usually use:

try:
    f = open(file)
    contents = f.read()
finally:
    f.close()

But now I am wondering if that is the same thing. Which method would you rather use? Why?
Thanks,
Sandra
--
http://mail.python.org/mailman/listinfo/python-list
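The try/finally pattern is the safe one; the with statement (available in Python 2.5 via a __future__ import, standard from 2.6) expresses the same guarantee more compactly, closing the file even if read() raises. A sketch (the file name is illustrative):

```python
path = "example.txt"  # hypothetical file for the demonstration

with open(path, "w") as f:
    f.write("hello")      # closed automatically when the block exits

with open(path) as f:
    contents = f.read()   # likewise closed, even on an exception
```

The one-liner open(file).read() relies on the reference count dropping to zero to close the file, which happens promptly in CPython but is not guaranteed by the language.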
Hacking with __new__
Ok, here's the problem: I'm modifying a 3rd party library (boto) to have more specific exceptions. I want to change S3ResponseError into about 30 more specific errors. Preferably I want to do this by changing as little code as possible. I also want the new exceptions to be a subclass of the old S3ResponseError so as to not break old code that catches it. If I meet these two requirements I expect my modification to make it into boto, and then I won't have to worry about maintaining a separate version.

So, thinking myself clever with Python, I thought I could change S3ResponseError to have a __new__ method which returns one of the 30 new exceptions. That way none of the raise S3ResponseError code needs changing. No problem. The trouble comes with those exceptions being subclasses of S3ResponseError, because __new__ is called again and goofs everything up. I think there may be a way to solve this, but after playing around in the shell for a while, I give up. I'm less concerned with the original problem than I am curious about the technical challenge. Can anyone tell me if it's possible to meet both of my requirements?
Thanks,
-Sandra

Here's my shell code if you want to play with it too (Bar is S3ResponseError, Zoo is a more specific error, Foo is just the base class of Bar.)

>>> class Foo(object):
...     def __new__(cls, *args):
...         print 'Foo.__new__', len(args)
...         return super(Foo, cls).__new__(cls, *args)
...
...     def __init__(self, a, b, c):
...         print 'Foo.__init__', 3
...         self.a = a
...         self.b = b
...         self.c = c
...
>>> class Bar(Foo):
...     def __new__(cls, a, b, c, *args):
...         print 'Bar.__new__', len(args)
...         if args:
...             return super(Bar, cls).__new__(cls, a, b, c, *args)
...
...         return Zoo(a, b, c, 7)
...
>>> class Zoo(Bar):
...     def __init__(self, a, b, c, d):
...         print 'Zoo.__init__', 4
...         Foo.__init__(self, a, b, c)
...         self.d = d
...
>>> Bar(1,2,3)
Bar.__new__ 0
Bar.__new__ 1
Foo.__new__ 4
Zoo.__init__ 4
Foo.__init__ 3
Traceback (most recent call last):
  File "", line 1, in
TypeError: __init__() takes exactly 5 arguments (4 given)
--
http://mail.python.org/mailman/listinfo/python-list
Re: Hacking with __new__
On Jul 24, 5:20 am, Bruno Desthuilliers wrote:
> IIRC, __new__ is supposed to return the newly created object - which you
> are not doing here.
>
> class Bar(Foo):
>     def __new__(cls, a, b, c, *args):
>         print 'Bar.__new__', len(args)
>         if not args:
>             cls = Zoo
>         obj = super(Bar, cls).__new__(cls, a, b, c, *args)
>         if not args:
>             obj.__init__(a, b, c, 7)
>         return obj

Thanks guys, but you are right, Bruno: you have to return the newly created object, or you get:

>>> b = Bar(1,2,3)
Bar.__new__ 0
Foo.__new__ 3
Zoo.__init__ 4
Foo.__init__ 3
>>> b is None
True

However, if you return the object you get:

>>> b = Bar(1, 2, 3)
Bar.__new__ 0
Foo.__new__ 3
Zoo.__init__ 4
Foo.__init__ 3
Traceback (most recent call last):
  File "", line 1, in
TypeError: __init__() takes exactly 5 arguments (4 given)

Which is the same blasted error, because it seems to want to call __init__ on the returned object, and it's calling it with 4 args :( Is there any way around that?
Thanks,
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
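The error in the follow-up comes from type.__call__: when __new__ returns an instance of (a subclass of) the class being constructed, Python then calls __init__ on it with the original arguments. One way to satisfy both requirements is therefore to keep every subclass's __init__ signature compatible with the base's and to dispatch inside __new__ only when the generic class itself is raised. The class names and the error-code mapping below are illustrative, not boto's actual API:

```python
class S3ResponseError(Exception):
    def __new__(cls, status, reason, body=None):
        # Dispatch to a specific subclass only when the generic class
        # is raised; direct use of a subclass is left untouched, so
        # __new__ does not recurse or "goof everything up".
        if cls is S3ResponseError:
            cls = _SPECIFIC.get(reason, cls)
        return super(S3ResponseError, cls).__new__(cls)

    def __init__(self, status, reason, body=None):
        Exception.__init__(self, status, reason)
        self.status, self.reason, self.body = status, reason, body

class NoSuchBucket(S3ResponseError):
    pass

# hypothetical mapping from S3 error codes to specific exceptions
_SPECIFIC = {"NoSuchBucket": NoSuchBucket}

err = S3ResponseError(404, "NoSuchBucket")
```

Because NoSuchBucket inherits the same __init__ signature, the automatic second __init__ call succeeds, and old `except S3ResponseError:` code still catches the specific errors.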
Can you create an instance of a subclass with an existing instance of the base class?
Can you create an instance of a subclass using an existing instance of the base class? Such things would be impossible in some languages or very difficult in others. I wonder if this can be done in python, without copying the base class instance, which in my case is a very expensive object. Any ideas? Thanks, -Sandra -- http://mail.python.org/mailman/listinfo/python-list
Re: Can you create an instance of a subclass with an existing instance of the base class?
Now that is a clever little trick. I never would have guessed you can
assign to __class__; Python always surprises me with its sheer
flexibility.
In this case it doesn't work.
TypeError: __class__ assignment: only for heap types
I suspect that's because this object begins its life in C code.
The technique of using the __class__.__subclasses__ also fails:
TypeError: cannot create 'B' instances
This seems more complex than I thought. Can one do this for an object
that begins its life in C?
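To illustrate the restriction: assigning to __class__ works between ordinary Python-defined ("heap") types, but raises a TypeError for instances of static, C-implemented types such as int. A quick sketch:

```python
class A(object):
    pass

class B(A):
    def show(self):
        return "now a B"

a = A()
a.__class__ = B     # fine: both classes are defined in Python
print(a.show())     # now a B

try:
    x = 1
    x.__class__ = B  # int is a static, C-implemented type
except TypeError as e:
    print("TypeError:", e)
```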
Thanks,
-Sandra
Peter Otten wrote:
> Sandra-24 wrote:
>
> > Can you create an instance of a subclass using an existing instance of
> > the base class?
> >
> > Such things would be impossible in some languages or very difficult in
> > others. I wonder if this can be done in python, without copying the
> > base class instance, which in my case is a very expensive object.
>
> You can change the class of an instance by assigning to the __class__
> attribute. The new class doesn't even need to be a subclass of the old:
>
> >>> class A(object):
> ... def __init__(self, name):
> ... self.name = name
> ... def show(self): print self.name
> ...
> >>> a = A("alpha")
> >>> a.show()
> alpha
> >>> class B(object):
> ... def show(self): print self.name.upper()
> ...
> >>> a.__class__ = B
> >>> a.show()
> ALPHA
>
> Peter
--
http://mail.python.org/mailman/listinfo/python-list
Re: Can you create an instance of a subclass with an existing instance of the base class?
Lawrence D'Oliveiro wrote:
> In article <[EMAIL PROTECTED]>,
>  "Sandra-24" <[EMAIL PROTECTED]> wrote:
>
> >Now that is a clever little trick. I never would have guessed you can
> >assign to __class__; Python always surprises me with its sheer
> >flexibility.
>
> That's because you're still thinking in OO terms.

It's not quite as simple as all that. I agree that people, especially people with a Java (ew) background, overuse OO when there are often simpler ways of doing things. However, in this case I'm simply getting an object (an mp_request object from mod_python) passed into my function, and before I pass it on to the functions that make up an individual web page, it is modified by adding members and methods to add functionality. It's not that I'm thinking in OO, but that the object is a convenient place to put things, especially functions that take an mp_request object as their first argument.

Sadly, I'm unable to create it as a Python object first, because it's created by the time my code comes into play. So I have to resort to using the new module to add methods. It works, but it has to be redone for every request. I thought moving the extra functionality to another object would simplify the task.

A better way might be to contain the mp_request within another object and use __getattr__ to lazily copy the inner object. I'd probably have to first copy those few fields that are not read-only, or use __setattr__ as well.

Thanks,
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
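A rough sketch of that wrapper idea; LazyRequest and FakeRequest are invented names for illustration, not mod_python's API. __getattr__ is only consulted when normal lookup on the wrapper fails, so new members live on the wrapper while everything else falls through to the wrapped object:

```python
class LazyRequest(object):
    """Wrap an existing request-like object (hypothetical example)."""

    def __init__(self, inner):
        # bypass our own __setattr__ while installing the inner object
        object.__setattr__(self, "_inner", inner)

    def __getattr__(self, name):
        # only called when the attribute isn't found on the wrapper itself
        return getattr(self._inner, name)

    def __setattr__(self, name, value):
        # new members go on the wrapper, leaving the inner object untouched;
        # this is also the hook for handling read-only inner fields
        object.__setattr__(self, name, value)


class FakeRequest(object):
    uri = "/index.html"


req = LazyRequest(FakeRequest())
req.user = "sandra"   # stored on the wrapper
print(req.uri)        # delegated to the inner object: /index.html
print(req.user)       # sandra
```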
Re: Can you create an instance of a subclass with an existing instance of the base class?
Lawrence D'Oliveiro wrote:
>
> All you want is a dictionary, then. That's basically what Python objects
> are.

Yes, that's it exactly. I made a lazy wrapper for it, and I was really happy with what I was able to accomplish; it turned out to be very easy.

Thanks,
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Why did someone write this?
try:
    exc_type, exc_value, exc_traceback = sys.exc_info()
    # Do something
finally:
    exc_traceback = None

Why the try/finally with setting exc_traceback to None? The Python docs didn't give me any clue, and I'm wondering what this person knows that I don't.

Thanks,
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: Why did someone write this?
I can't believe I missed it in the documentation. Maybe it wasn't in the offline version I was using, but more likely it was just one of those things.

So the trouble seems to be that the traceback holds a reference to the frame where the exception occurred, and as a result a local variable that references the traceback in that frame now holds a reference to its own frame (preventing the frame from being reclaimed), and it can't be cleaned up prior to Python 2.2 with GC enabled. Which means it's fine to hold a reference to the traceback in a frame not in the traceback, or (I think) to create a temporary unnamed reference to it.

Thanks for your help guys!
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
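To make the cycle concrete, here is a small sketch of the same defensive pattern in modern Python, where stashing the traceback in a local variable still creates a frame cycle:

```python
import sys

def handle():
    try:
        raise ValueError("boom")
    except ValueError:
        exc_type, exc_value, exc_traceback = sys.exc_info()
        try:
            # use the traceback while we hold it, e.g. grab the line
            # number where the exception was raised
            lineno = exc_traceback.tb_lineno
        finally:
            # break the cycle: this local references a traceback that
            # references this very frame, which references the local
            exc_traceback = None
    return lineno

print(handle())
```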
How to determine if a line of python code is a continuation of the line above it
I'm not sure how complex this is. I've been brainstorming a little, and I've come up with: the previous line ended with a comma or a \ (before an optional comment). That's easy to cover with a regex.

But that doesn't cover everything, because this is legal:

l = [
    1,
    2,
    3
]

and the same is true of dictionaries and tuples as well. Not sure how I would check for that programmatically yet. Are there any others I'm missing?

Thanks,
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
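For a more robust check than a regex, the standard tokenize module already knows every way a logical line can span physical lines (brackets, backslashes, triple-quoted strings). A sketch along those lines (the function name is my own):

```python
import io
import tokenize

def continuation_lines(source):
    """Return the set of 1-based physical line numbers that continue
    the logical line begun above them."""
    cont = set()
    start = None  # first physical line of the current logical line
    skip = (tokenize.NL, tokenize.COMMENT, tokenize.INDENT,
            tokenize.DEDENT, tokenize.ENDMARKER)
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NEWLINE:
            # end of a logical line: every physical line after the first
            # one up to here is a continuation
            if start is not None:
                cont.update(range(start + 1, tok.end[0] + 1))
            start = None
        elif tok.type in skip:
            continue
        elif start is None:
            start = tok.start[0]
    return cont

src = "l = [\n    1,\n    2,\n]\nx = 1\n"
print(sorted(continuation_lines(src)))  # [2, 3, 4]
```

Backslash continuations and multi-line strings fall out for free, since tokenize handles them before we ever see the tokens.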
Re: How to determine if a line of python code is a continuation of the line above it
No, it's not an academic exercise, but you're right, the situation is more complex than I originally thought. I've got a minor bug in my template code, but it'd cause more trouble to fix than to leave in for the moment.

Thanks for your input!
-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
Re: how relevant is C today?
C/C++ is used for a lot of things and isn't going anywhere. I recommend you learn it not because you should create applications in C or C++, but because it will increase your skills and value as a programmer. I recommend you even spend a few weeks with an assembly language, for the same reason.

However, when it comes to beginning new things with an eye for getting the job done, C/C++ (or Java, for that matter...) is usually a bad idea. That having been said, there are always exceptions to the rule, and you'll learn better how to call things as you advance your skills as a programmer.

There are also sometimes parts of your application that just cannot be optimized any more in a high-level language and might benefit from being converted to C or C++. But do yourself a favor and only do such things after taking careful measurements and exhausting other options. Many time-consuming algorithms don't gain a noticeable speed improvement in lower-level languages.

-Sandra
--
http://mail.python.org/mailman/listinfo/python-list
