Taking into account that I am very new to Python, and so must be missing
something important, dumping xml.dom and going to lxml made a WORLD of
difference to the performance of the application.
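For reference, the switch described here is easy to measure directly; a minimal sketch, with "data.xml" standing in for the 7.2MB document discussed in the thread:

# Rough timing of xml.dom.minidom against lxml.etree on the same file.
# "data.xml" is a placeholder file name, not from the original post.
import time
import xml.dom.minidom
import lxml.etree

start = time.time()
doc = xml.dom.minidom.parse("data.xml")      # builds the full DOM in memory
print "minidom: %.2f s" % (time.time() - start)

start = time.time()
tree = lxml.etree.parse("data.xml")          # libxml2-backed parser
print "lxml:    %.2f s" % (time.time() - start)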
Thanks for the replies. I got my program working, but the memory problem
remains. When the program finishes and I am brought back to PythonWin,
the memory is still tied up until I run gc.collect(). While my choice of
platform for XML processing may not be the best one (I will change it later)
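A quick way to check whether it really is the cycle collector that frees things; a sketch, not the poster's code:

import gc

# ... run the XML processing here ...

freed = gc.collect()          # force a full garbage collection pass
print "unreachable objects found:", freed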
Peter Otten wrote:
> Like Gerhard says, in the long run you are probably better off with
> ElementTree.
In the long run it's even better to use lxml [1]. It's the fastest and
most powerful XML library for Python, and it also supports the ElementTree API.
Christian
[1] http://codespeak.net/lxml/
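Because lxml implements the ElementTree API, code written against ElementTree usually runs unchanged on either library; a small sketch of that idea ("data.xml" is a placeholder):

# Prefer lxml when it is installed, fall back to the stdlib implementation.
try:
    from lxml import etree
except ImportError:
    import xml.etree.cElementTree as etree   # in the stdlib since Python 2.5

tree = etree.parse("data.xml")
for elem in tree.getroot():
    print elem.tag, elem.attrib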
Thanks for the help.
I converted everything into the StringIO() format. Memory is still getting
chewed up. I will look at ElementTree later, but for now I believe the speed
issue must be related to the amount of memory that is getting used. It is
causing all of Windows to slow to a crawl. gc.coll
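If the whole tree doesn't have to stay in memory at once, ElementTree's iterparse can discard elements as soon as they have been handled, which keeps memory roughly constant. A sketch of that pattern; the "record" and "name" tags are made up, not from the poster's document:

import xml.etree.cElementTree as etree

# Stream through the document instead of keeping the whole tree in RAM.
for event, elem in etree.iterparse("data.xml"):
    if elem.tag == "record":            # hypothetical element of interest
        print elem.findtext("name")     # stand-in for the real per-node work
        elem.clear()                    # drop children so they can be freed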
Carbon Man wrote:
> Very new to Python, running 2.5 on windows.
> I am processing an XML file (7.2MB). Using the standard library I am
> recursively processing each node and parsing it. The branches don't go
> particularly deep. What is happening is that the program is running really
> really slow
Here's a link for you:
http://wiki.python.org/moin/PythonSpeed/PerformanceTips
which also talks about string concatenation and other do's and don'ts.
-- Gerhard
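The string-concatenation advice on that page boils down to collecting pieces in a list and joining them once, instead of growing a string with += in a loop; for example:

chunks = ["<node/>"] * 10000     # stand-in for many small strings

# Slow: each += can copy everything accumulated so far.
result = ""
for chunk in chunks:
    result += chunk

# Usually much faster: join the pieces in one pass.
result = "".join(chunks)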
Carbon Man wrote:
> Very new to Python, running 2.5 on windows.
> I am processing an XML file (7.2MB). Using the standard library I am
> recursively processing each node and parsing it. The branches don't go
> particularly deep. What is happening is that the program is running really
> really sl
Very new to Python, running 2.5 on Windows.
I am processing an XML file (7.2MB). Using the standard library I am
recursively processing each node and parsing it. The branches don't go
particularly deep. What is happening is that the program is running really,
really slowly, so slow that even runn
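The post doesn't include the code, but a recursive walk over a minidom tree of the kind described would look roughly like this; the per-node handling is a guess, since the original isn't shown:

import xml.dom.minidom

def walk(node, depth=0):
    # Visit every element; printing the tag name stands in for the real parsing.
    if node.nodeType == node.ELEMENT_NODE:
        print "  " * depth + node.tagName
    for child in node.childNodes:
        walk(child, depth + 1)

doc = xml.dom.minidom.parse("data.xml")   # placeholder file name
walk(doc.documentElement)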
Ken> Unfortunately, Python has some problems in this area. In
Ken> particular, since ubiquitous lists and dictionaries are dynamically
Ken> resized as needed, memory fragmentation seems inevitable.
That's not necessarily true. Also, I would say that Python has made
tradeoffs in this
My beta testers are complaining about excessive memory usage. It's a
wxPython app with several embedded Mozilla ActiveX controls and a local
web server.
Unfortunately, Python has some problems in this area. In particular,
since ubiquitous lists and dictionaries are dynamically resized as
needed, memory fragmentation seems inevitable.
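The resizing in question is visible from Python itself, although sys.getsizeof only appeared in 2.6, so this is an illustration rather than something the 2.5-era posters could have run as-is:

import sys

sizes = set()
items = []
for i in range(1000):
    items.append(i)
    sizes.add(sys.getsizeof(items))   # size changes only when the list reallocates

# Only a handful of distinct sizes show up: lists over-allocate in steps,
# and each step means asking the allocator for a new, larger block.
print sorted(sizes)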
Thanks Marc,
I just tried shelve but it is very slow :(
I haven't tried the dbs yet.
Andre
Marc 'BlackJack' Rintsch wrote:
> On Mon, 15 Oct 2007 11:31:59 +0200, amdescombes wrote:
>
>> Are there any classes that implement disk based dictionaries?
>
> Take a look at the `shelve` module from the standard library.
On Mon, 15 Oct 2007 11:31:59 +0200, amdescombes wrote:
> Are there any classes that implement disk based dictionaries?
Take a look at the `shelve` module from the standard library.
Or object databases like ZODB or Durus.
Ciao,
Marc 'BlackJack' Rintsch
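For the archives, a minimal sketch of shelve used as a disk-backed dictionary (the file name is arbitrary):

import shelve

db = shelve.open("cache.db")     # keys must be strings; values are pickled
for n in range(1000):
    db[str(n)] = n * n
db.sync()                        # flush pending writes to disk
print db["42"]
db.close()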
Yes, I think that might be the issue, perhaps I could implement the
solution using several dictionaries instead of just one.
Are there any classes that implement disk based dictionaries?
Thanks,
Andre
>
> I don't know whether Python dictionaries must live in a contiguous piece of
> memory, but if so, that could be the issue.
AMD <[EMAIL PROTECTED]> wrote:
>
>I do the reading one line at a time, the problem seems to be with the
>dictionary I am creating.
I don't know whether Python dictionaries must live in a contiguous piece of
memory, but if so, that could be the issue. The system DLLs in Server 2003
have been "reb
Hi Brad,
I do the reading one line at a time, the problem seems to be with the
dictionary I am creating.
Andre
> amdescombes wrote:
>> Hi,
>>
>> I am using Python 2.5.1
>> I have an application that reads a file and generates a key in a
>> dictionary for each line it reads. I have managed to r
amdescombes wrote:
> Hi,
>
> I am using Python 2.5.1
> I have an application that reads a file and generates a key in a
> dictionary for each line it reads. I have managed to read a 1GB file and
> generate more than 8 million keys on a Windows XP machine with only 1GB
> of memory and all works
Hi,
I am using Python 2.5.1
I have an application that reads a file and generates a key in a
dictionary for each line it reads. I have managed to read a 1GB file and
generate more than 8 million keys on a Windows XP machine with only 1GB
of memory and all works as expected. When I use the same
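A sketch of the approach being described, i.e. one dictionary entry per line of a large file; how the real key and value are derived from each line isn't shown in the post, so the stripped line itself is used here as a stand-in:

counts = {}
f = open("big_input.txt")            # placeholder for the ~1GB input file
try:
    for line in f:                   # reads one line at a time
        key = line.strip()
        counts[key] = counts.get(key, 0) + 1
finally:
    f.close()
print len(counts), "keys"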
[Nathan Bates]
> Are the Python developers running Python under Valgrind?
Please read Misc/README.valgrind (in your Python distribution).
Are the Python developers running Python under Valgrind?
If not, FYI, Valgrind is an excellent memory-checker for Linux.
Valgrind is reporting a ton of memory problems.
Worrisome are "Conditional jump or move depends on uninitialised
value(s)" errors.
I simply started the Python 2.4.2 i
Michele Petrazzo wrote:
> [EMAIL PROTECTED] wrote:
> > Michele Petrazzo wrote:
> >> I haven't tried to recompile py 2.4 myself with gcc 4.1 because it
> >> is already compiled with it (4.0.3), so I think (only think) that
> >> is a py 2.5 problem. I'm right? or I have to compile it with
> >> someth
[EMAIL PROTECTED] wrote:
> Michele Petrazzo wrote:
>> I haven't tried to recompile py 2.4 myself with gcc 4.1 because it
>> is already compiled with it (4.0.3), so I think (only think) that
>> is a py 2.5 problem. I'm right? or I have to compile it with
>> something other switches?
>
> Sounds like a gcc problem to me.
Michele Petrazzo wrote:
> Then I execute my test. The memory usage of 2.5a2 and gcc 3.3 that I
> see with "top", is the same (about VIRT: 260 MB and RES: 250MB ) that
> with the py 2.3 and 2.4, but then I recompile with 4.1 and execute
> the same test, my system "stop to work"... with "top" I can
[Michele Petrazzo]
> I'm doing some tests on my debian testing and I see a very strange
> memory problem with py 2.5a2 (just downloaded) and compiled with gcc
> 4.1.0, but not with the gcc 3.3.5:
>
> My test are:
>
> #--test.py
> import sys
> if sys.version.startswith("2.3"):
>     from sets import Set as set
Michele Petrazzo wrote:
>
> I haven't tried to recompile py 2.4 myself with gcc 4.1 because it is
> already compiled with it (4.0.3), so I think (only think) that is a py
> 2.5 problem.
> I'm right? or I have to compile it with something other switches?
Sounds like a gcc problem to me. Try adding
Hi list,
I'm doing some tests on my Debian testing system and I see a very strange
memory problem with py 2.5a2 (just downloaded) and compiled with gcc
4.1.0, but not with gcc 3.3.5:
My tests are:
#--test.py
import sys
if sys.version.startswith("2.3"):
    from sets import Set as set
b=set(range(50
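The script is cut off in the archive; a guess at what the complete test looked like, with the set size picked arbitrarily rather than recovered from the original:

#--test.py (reconstruction; the original is truncated above)
import sys
if sys.version.startswith("2.3"):
    from sets import Set as set     # Python 2.3 had no built-in set type

# Arbitrary size chosen for illustration; the original value is not preserved.
b = set(range(5000000))
raw_input("check memory usage in top, then press Enter to exit")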
[Jeremy Hylton]
> ...
> It looks like your application has a single persistent instance -- the
> root ExtendedTupleTable -- so there's no way for ZODB to manage the
> memory. That object and everything reachable from it must be in memory
> at all times.
Indeed, I tried running this program under
[Jeremy Hylton]
> ...
> The ObjectInterning instance is another source of problem, because it's
> a dictionary that has an entry for every object you touch.
Some vital context was missing in this post. Originally, on c.l.py, DJTB
wasn't using ZODB at all. In effect, he had about 5000 lists each
On 5/21/05, DJTB <[EMAIL PROTECTED]> wrote:
> [posted to comp.lang.python, mailed to [EMAIL PROTECTED]]
[Following up to both places.]
> I'm having problems storing large amounts of objects in a ZODB.
> After committing changes to the database, elements are not cleared from
> memory. Since the num
class ExtendedTupleTable(Persistent):
    def __init__(self):
        self.interning = ObjectInterning()

        # This Set stores all generated ExtendedTuple objects.
        self.ets = Set()  # et(s): ExtendedTuple object(s)
        # This dictionary stores a mapping of elements to Sets of
[posted to comp.lang.python, mailed to [EMAIL PROTECTED]]
Hi,
I'm having problems storing large amounts of objects in a ZODB.
After committing changes to the database, elements are not cleared from
memory. Since the number of objects I'd like to store in the ZODB is too
large to fit in RAM, my pro
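One standard way out of the "everything hangs off a single persistent object" trap is to store the children in BTrees, so each bucket is its own persistent object and ZODB can ghost it out of memory after a commit. A rough sketch; the class and method names here are illustrative, not DJTB's:

from persistent import Persistent
from BTrees.OOBTree import OOBTree, OOSet

class TupleStore(Persistent):
    def __init__(self):
        self.ets = OOSet()        # all stored tuples, in persistent buckets
        self.index = OOBTree()    # element -> OOSet of tuples containing it

    def add(self, tup):
        self.ets.insert(tup)
        for element in tup:
            if element not in self.index:
                self.index[element] = OOSet()
            self.index[element].insert(tup)

# After transaction.commit(), connection.cacheMinimize() can then push
# the now-unmodified buckets out of the in-memory cache.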
Hi,
I took an example from wxPython with the IE web browser and
created a refresh button to automatically refresh a web page in 5
second intervals. But I notice that the memory utilization in Python
keeps increasing over time. Can anyone tell me why this is happening?
Here is my code:
==
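The code itself is cut off above. Purely as a sketch of the pattern described (a wx.Timer reloading the page every 5 seconds), and assuming the wx.lib.iewin ActiveX wrapper and its LoadUrl method, something like this:

import wx
import wx.lib.iewin as iewin          # Windows-only IE ActiveX wrapper (assumed)

class BrowserFrame(wx.Frame):
    def __init__(self, url):
        wx.Frame.__init__(self, None, -1, "Auto-refresh")
        self.url = url
        self.ie = iewin.IEHtmlWindow(self)
        self.ie.LoadUrl(self.url)
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.on_timer, self.timer)
        self.timer.Start(5000)        # reload every 5 seconds, as in the post

    def on_timer(self, event):
        self.ie.LoadUrl(self.url)

app = wx.App(False)
BrowserFrame("http://example.com/").Show()   # placeholder URL
app.MainLoop()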