> I had the (mis)pleasure of dealing with a multi-terabyte PostgreSQL
> instance many years ago and figuring out why random scripts were eating
> up system memory became quite common.
>
> All of our "ETL" scripts were either written in Perl, Java, or Python
> but the results were always the same.
On 3/29/21 5:12 AM, Alexey wrote:
Hello everyone!
I'm experiencing problems with memory consumption.
I have a class which is doing an ETL job. What's happening inside:
- fetching existing objects from DB via SQLAlchemy
- iterate over raw data
- create new/update existing objects
- commit changes
Thursday, April 1, 2021 at 15:56:23 UTC+3, Marco Ippolito:
> > > Are you running with systemd?
> >
> > I really don't know.
> An example of how to check:
>
> ```
> $ readlink /sbin/init
> /lib/systemd/systemd
> ```
>
> You want to check which program runs as PID 1.
Thank you Marco
Thursday, April 1, 2021 at 15:46:21 UTC+3, Marco Ippolito:
> I suspect the high watermark of `` needs to be reachable still and,
> secondly, that a forceful constraint whilst running would crash the
> container?
Exactly.
Thursday, April 1, 2021 at 17:21:59 UTC+3, Mats Wichmann:
> On 4/1/21 5:50 AM, Alexey wrote:
> > Found it. As I said before, the problem was lurking in the cache.
> > A few days ago I read about circular references and things like that and
> > I thought to myself that it might be the case. To build the cache I was
> > using lots of 'setdefault' methods chained together.
Thursday, April 1, 2021 at 16:02:15 UTC+3, Barry:
> > On 1 Apr 2021, at 13:46, Marco Ippolito wrote:
> >
> >
> >>
> What if you increase the machine's (operating system's) swap space? Does
> that take care of the problem in practice?
> >>>
> >>> I can't do that because it will affect other containers running on this
> >>> host.
Thursday, April 1, 2021 at 15:27:01 UTC+3, Chris Angelico:
> On Thu, Apr 1, 2021 at 10:56 PM Alexey wrote:
> >
> > Found it. As I said before, the problem was lurking in the cache.
> > A few days ago I read about circular references and things like that and
> > I thought to myself that it might be the case.
On 4/1/21 5:50 AM, Alexey wrote:
Found it. As I said before, the problem was lurking in the cache.
A few days ago I read about circular references and things like that and
I thought to myself that it might be the case. To build the cache I was
using lots of 'setdefault' methods chained together:
self.__cache.setdefault(cluster_name,
> On 1 Apr 2021, at 13:46, Marco Ippolito wrote:
>
>
>>
What if you increase the machine's (operating system's) swap space? Does
that take care of the problem in practice?
>>>
>>> I can't do that because it will affect other containers running on this
>>> host.
>>> In my opinion it may significantly reduce their performance.
> > Are you running with systemd?
>
> I really don't know.
An example of how to check:
```
$ readlink /sbin/init
/lib/systemd/systemd
```
You want to check which program runs as PID 1.
```
ps 1
```
Thursday, April 1, 2021 at 14:57:29 UTC+3, Barry:
> > On 31 Mar 2021, at 09:42, Alexey wrote:
> >
> > Wednesday, March 31, 2021 at 01:20:06 UTC+3, Dan Stromberg:
> >>> On Tue, Mar 30, 2021 at 1:25 AM Alexey wrote:
> >>>
> >>>
> >>> I'm sorry. I didn't understand your question right. If I have
> >> What if you increase the machine's (operating system's) swap space? Does
> >> that take care of the problem in practice?
> >
> > I can't do that because it will affect other containers running on this
> > host.
> > In my opinion it may significantly reduce their performance.
>
> Assuming thi
On Thu, Apr 1, 2021 at 10:56 PM Alexey wrote:
>
> Found it. As I said before the problem was lurking in the cache.
> Few days ago I read about circular references and things like that and
> I thought to myself that it might be the case. To build the cache I was
> using lots of 'setdefault' methods
> On 31 Mar 2021, at 09:42, Alexey wrote:
>
> Wednesday, March 31, 2021 at 01:20:06 UTC+3, Dan Stromberg:
>>> On Tue, Mar 30, 2021 at 1:25 AM Alexey wrote:
>>>
>>>
>>> I'm sorry. I didn't understand your question right. If I have 4 workers,
>>> they require 4Gb
>>> in idle state and some extra memory when they execute other tasks.
Found it. As I said before, the problem was lurking in the cache.
A few days ago I read about circular references and things like that and
I thought to myself that it might be the case. To build the cache I was
using lots of 'setdefault' methods chained together:
self.__cache.setdefault(cluster_name,
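The snippet cuts off above; a minimal sketch of such a chained-setdefault cache (the key names here are invented for illustration, not taken from the thread):
```
cluster_name, host_name, obj = "c1", "h1", object()
cache = {}
# each setdefault creates the nested container on first access
(cache.setdefault(cluster_name, {})
      .setdefault(host_name, [])
      .append(obj))
cache.clear()  # dropping the top-level dict releases the whole tree
```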
On 31/03/2021 09:35, Alexey wrote:
Wednesday, March 31, 2021 at 01:20:06 UTC+3, Dan Stromberg:
What if you increase the machine's (operating system's) swap space? Does
that take care of the problem in practice?
I can't do that because it will affect other containers running on this host.
In my opinion it may significantly reduce their performance.
Wednesday, March 31, 2021 at 18:17:46 UTC+3, Dieter Maurer:
> Alexey wrote at 2021-3-31 02:43 -0700:
> >Wednesday, March 31, 2021 at 06:54:52 UTC+3, Inada Naoki:
> > ...
> >> You can get some hints from sys._debugmallocstats(). It prints
> >> obmalloc (allocator for small objects) stats to stderr.
> >
Alexey wrote at 2021-3-31 02:43 -0700:
>Wednesday, March 31, 2021 at 06:54:52 UTC+3, Inada Naoki:
> ...
>> You can get some hints from sys._debugmallocstats(). It prints
>> obmalloc (allocator for small objects) stats to stderr.
>> Try printing stats before and after 1st run, and after 2nd run. And
>>
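A minimal way to apply this suggestion (the task body below is a stand-in for the real job, which the thread never shows):
```
import sys

def run_etl_task():
    # stand-in workload, not the code from the thread
    data = [dict(i=i) for i in range(100000)]
    data.clear()

sys._debugmallocstats()   # baseline obmalloc stats, printed to stderr
run_etl_task()
sys._debugmallocstats()   # compare "# arenas allocated current" between dumps
```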
Wednesday, March 31, 2021 at 14:16:30 UTC+3, Inada Naoki:
> > ** Before first run:
> > # arenas allocated total = 776
> > # arenas reclaimed = 542
> > # arenas highwater mark = 234
> > # arenas allocated current = 234
> > 234 arenas * 262144 bytes/arena = 61,341,696
> > ** After first run:
> ** Before first run:
> # arenas allocated total = 776
> # arenas reclaimed = 542
> # arenas highwater mark = 234
> # arenas allocated current = 234
> 234 arenas * 262144 bytes/arena = 61,341,696
Wednesday, March 31, 2021 at 11:52:43 UTC+3, Marco Ippolito:
> > > At which point does the problem start manifesting itself?
> > The problem spot is my cache (a dict). I simplified my code to just load
> > all the objects into this dict and then clear it.
> What's the memory utilisation just _before_ performing this load?
Wednesday, March 31, 2021 at 06:54:52 UTC+3, Inada Naoki:
> First of all, I recommend upgrading your Python. Python 3.6 is a bit old.
I was thinking about that.
> As you're saying, Python cannot return the memory to the OS until the
> whole arena becomes unused.
> If your task releases all objects alloc
Wednesday, March 31, 2021 at 05:45:27 UTC+3, [email protected]:
> Since everyone is talking about vague OS memory use and not at all about
> working set size of Python objects, let me ...
> On 29Mar2021 03:12, Alexey wrote:
> >I'm experiencing problems with memory consumption.
> > At which point does the problem start manifesting itself?
> The problem spot is my cache (a dict). I simplified my code to just load
> all the objects into this dict and then clear it.
What's the memory utilisation just _before_ performing this load? I am assuming
it's much less than this 1 GB you
Wednesday, March 31, 2021 at 01:20:06 UTC+3, Dan Stromberg:
> On Tue, Mar 30, 2021 at 1:25 AM Alexey wrote:
>
> >
> > I'm sorry. I didn't understand your question right. If I have 4 workers,
> > they require 4Gb
> > in idle state and some extra memory when they execute other tasks. If I
> > increase workers count up to 16, they'll eat all the memory I have (16GB).
Tuesday, March 30, 2021 at 18:43:54 UTC+3, Alan Gauld:
> On 29/03/2021 11:12, Alexey wrote:
> The first thing you really need to tell us is which
> OS you are using. Memory management varies wildly
> depending on OS. Even different flavours of *nix
> do it differently.
I'm using Ubuntu (5.8.
Tuesday, March 30, 2021 at 18:43:51 UTC+3, Marco Ippolito:
> Have you tried to identify where in your code the surprising memory
> allocations
> are made?
Yes.
> You could "bisect search" by adding breakpoints:
>
> https://docs.python.org/3/library/functions.html#breakpoint
>
> At which point does the problem start manifesting itself?
On Mon, Mar 29, 2021 at 7:16 PM Alexey wrote:
>
> Problem. Before executing, my interpreter process weighs ~100Mb, after first
> run memory increases up to 500Mb
> and after the second run it weighs 1Gb. If I continue to run this class,
> memory won't increase, so I think
> it's not a memory leak.
Since everyone is talking about vague OS memory use and not at all about
working set size of Python objects, let me ...
On 29Mar2021 03:12, Alexey wrote:
>I'm experiencing problems with memory consumption.
>
>I have a class which is doing an ETL job. What's happening inside:
> - fetching existing objects from DB via SQLAlchemy
On Tue, Mar 30, 2021 at 1:25 AM Alexey wrote:
>
> I'm sorry. I didn't understand your question right. If I have 4 workers,
> they require 4Gb
> in idle state and some extra memory when they execute other tasks. If I
> increase workers
> count up to 16, they'll eat all the memory I have (16GB) on
On 30/03/2021 16:50, Chris Angelico wrote:
>> A 1GB process on modern computers is hardly a big problem?
>> Most machines have 4G and many have 16G or even 32G
>> nowadays.
>>
>
> Desktop systems maybe, but if you rent yourself a worker box, it might
> not have anything like that much. Especially
On Wed, Mar 31, 2021 at 2:44 AM Alan Gauld via Python-list
wrote:
>
> On 29/03/2021 11:12, Alexey wrote:
> > Hello everyone!
> > I'm experiencing problems with memory consumption.
> >
>
> The first thing you really need to tell us is which
> OS you are using.
Have you tried to identify where in your code the surprising memory allocations
are made?
You could "bisect search" by adding breakpoints:
https://docs.python.org/3/library/functions.html#breakpoint
At which point does the problem start manifesting itself?
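One way to act on this: print the process RSS around each phase and break into pdb at the suspicious one. A generic sketch (the list comprehension is only a stand-in for the real loading step):
```
import resource

def peak_rss_mb():
    # ru_maxrss is peak RSS: KiB on Linux, bytes on macOS
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

print("before:", peak_rss_mb(), "MB")
data = [object() for _ in range(100000)]  # stand-in for the cache load
print("after:", peak_rss_mb(), "MB")
breakpoint()  # inspect live objects in pdb, then 'c' to continue
```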
On 29/03/2021 11:12, Alexey wrote:
> Hello everyone!
> I'm experiencing problems with memory consumption.
>
The first thing you really need to tell us is which
OS you are using. Memory management varies wildly
depending on OS. Even different flavours of *nix
do it differently.
Ho
> I'm sorry. I didn't understand your question right. If I have 4 workers,
> they require 4Gb
> in idle state and some extra memory when they execute other tasks. If I
> increase workers
> count up to 16, they'll eat all the memory I have (16GB) on my machine and
> will crash as soon
> as the system gets out of memory.
Monday, March 29, 2021 at 19:56:52 UTC+3, Stestagg:
> > > 2. Can you try a test with 16 or 32 active workers (i.e. number of
> > > workers=2x available memory in GB), do they all still end up with 1gb
> > > usage? or do you get any other memory-related issues running this?
> > Yes. They will consume 1Gb each.
> you can instruct the memory management via the envvar
> `MALLOC_ARENA_MAX` to use a common memory pool (called "arena")
> for all threads.
> It is known that this can drastically reduce memory consumption
> in multi-thread systems.
Tried with this variable. No luck. Thanks anyway.
> > 2. Can you try a test with 16 or 32 active workers (i.e. number of
> > workers=2x available memory in GB), do they all still end up with 1gb
> > usage? or do you get any other memory-related issues running this?
> Yes. They will consume 1Gb each. It doesn't matter how many workers I
> have,
> t
> In production I have 8 workers,
> so in idle they will hold 8Gb.
Depending on your system (this works for `glibc` systems),
you can instruct the memory management via the envvar
`MALLOC_ARENA_MAX` to use a common memory pool (called "arena")
for all threads.
> It is known that this can drastically reduce memory consumption
> in multi-thread systems.
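glibc reads MALLOC_ARENA_MAX from the environment at process start, so it must be set before the worker launches. A sketch (the celery command line is illustrative, not from the thread):
```
import os
import subprocess

env = dict(os.environ, MALLOC_ARENA_MAX="2")
# substitute the real worker command here
subprocess.run(["celery", "-A", "myapp", "worker"], env=env)
```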
Monday, March 29, 2021 at 17:19:02 UTC+3, Stestagg:
> On Mon, Mar 29, 2021 at 2:32 PM Alexey wrote:
> Some questions here to help understand more:
>
> 1. Do you have any actual problems caused by running 8 celery workers
> (beyond high memory reports)? What are they?
No. Everything works fine.
On Mon, Mar 29, 2021 at 2:32 PM Alexey wrote:
> Monday, March 29, 2021 at 15:57:43 UTC+3, Julio Oña:
> > It looks like the problem is in celery.
> > The mentioned issue is still open, so not sure if it was corrected.
> >
> > https://manhtai.github.io/posts/memory-leak-in-celery/
>
> As I mentioned in my first message, I tried to run
Monday, March 29, 2021 at 15:57:43 UTC+3, Julio Oña:
> It looks like the problem is in celery.
> The mentioned issue is still open, so not sure if it was corrected.
>
> https://manhtai.github.io/posts/memory-leak-in-celery/
As I mentioned in my first message, I tried to run
this task(cla
It looks like the problem is in celery.
The mentioned issue is still open, so not sure if it was corrected.
https://manhtai.github.io/posts/memory-leak-in-celery/
Julio
On Mon, Mar 29, 2021 at 08:31, Alexey ([email protected])
wrote:
> Hello Lars!
> Thanks for your interest.
Hello Lars!
Thanks for your interest.
The problem appears when all celery workers
require 1Gb of RAM each in idle state. They
hold this memory constantly and when they do
something useful, they grab more memory. I
think 8Gb+ in idle state is quite a lot for my
app.
> Did it crash your system or p
e?
I know that there can be (good) reasons to care, but as long as your
tasks run fine, without clogging your system, in my opinion there might
be nothing to worry about.
Cheers
Lars
On 29.03.21 at 12:12, Alexey wrote:
> Hello everyone!
> I'm experiencing problems with memory consumption.
Hello everyone!
I'm experiencing problems with memory consumption.
I have a class which is doing an ETL job. What's happening inside:
- fetching existing objects from DB via SQLAlchemy
- iterate over raw data
- create new/update existing objects
- commit changes
Before processing data I c
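The loop described above, as a rough sketch (the model and column names are invented for illustration; the thread never shows the real code):
```
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class MyModel(Base):               # invented model
    __tablename__ = "things"
    key = Column(String, primary_key=True)
    value = Column(Integer)

def run_etl(session, raw_rows):
    # fetch existing objects once, keyed for lookup (the "cache")
    existing = {o.key: o for o in session.query(MyModel).all()}
    for row in raw_rows:                    # iterate over raw data
        obj = existing.get(row["key"])
        if obj is None:
            session.add(MyModel(**row))     # create new
        else:
            obj.value = row["value"]        # update existing
    session.commit()                        # commit changes

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    run_etl(session, [{"key": "a", "value": 1}])
```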
On Fri, Oct 6, 2017 at 8:05 PM, D'Arcy Cain wrote:
> On 10/05/2017 05:42 PM, Fetchinson . via Python-list wrote:
>>
>> On 10/5/17, Chris Angelico wrote:
>>>
>>> On Fri, Oct 6, 2017 at 8:06 AM, Fetchinson . via Python-list
>>> wrote:
import mystuff
mystuff.some_more_expen
On 10/05/2017 05:42 PM, Fetchinson . via Python-list wrote:
On 10/5/17, Chris Angelico wrote:
On Fri, Oct 6, 2017 at 8:06 AM, Fetchinson . via Python-list
wrote:
import mystuff
mystuff.some_more_expensive_stuff( x )
del mystuff
del x
You're not actually deleting anything.
On 6 October 2017 at 06:51, Chris Angelico wrote:
> Cloud computing is the answer.
>
> If you don't believe me, just watch the sky for a while - new clouds
> get added without the sky turning off and on again.
The sky reboots every 24 hours, and the maintenance window's about
8-12 hours. Not exac
Chris Angelico wrote:
> On Fri, Oct 6, 2017 at 4:14 PM, Gregory Ewing
> wrote:
>> Steve D'Aprano wrote:
>>>
>>> Plus the downtime and labour needed to install the memory, if the
>>> computer will even take it.
>>
>>
>> Obviously we need an architecture that supports hot-swappable
>> robot-install
On Fri, 6 Oct 2017 04:51 pm, Chris Angelico wrote:
> On Fri, Oct 6, 2017 at 4:14 PM, Gregory Ewing
> wrote:
>> Steve D'Aprano wrote:
>>>
>>> Plus the downtime and labour needed to install the memory, if the computer
>>> will even take it.
>>
>>
>> Obviously we need an architecture that supports h
On Fri, Oct 6, 2017 at 4:14 PM, Gregory Ewing
wrote:
> Steve D'Aprano wrote:
>>
>> Plus the downtime and labour needed to install the memory, if the computer
>> will even take it.
>
>
> Obviously we need an architecture that supports hot-swappable
> robot-installable RAM.
>
Cloud computing is the answer.
Steve D'Aprano wrote:
Plus the downtime and labour needed to install the memory, if the computer
will even take it.
Obviously we need an architecture that supports hot-swappable
robot-installable RAM.
--
Greg
memory. As you'll
> see I tried to make every attempt at removing everything at the end of
> each cycle so that memory consumption doesn't grow as the for loop
> progresses, but it still does.
>
> import os
>
> for f in os.listdir( '.' ):
>
takes about 5-10 MB of memory. As you'll
> see I tried to make every attempt at removing everything at the end of
> each cycle so that memory consumption doesn't grow as the for loop
> progresses, but it still does.
>
> import os
>
> for f in os.listdir( '.' ):
> see I tried to make every attempt at removing everything at the end of
> each cycle so that memory consumption doesn't grow as the for loop
> progresses, but it still does.
How do you know memory consumption is still growing?
I'm not saying it isn't, but knowing what the sy
memory. As you'll
>>see I tried to make every attempt at removing everything at the end of
>>each cycle so that memory consumption doesn't grow as the for loop
>>progresses, but it still does.
>
> "2x 8GB DIMM DDR3-1600" cost $95.99 according to a web pa
tried to make every attempt at removing everything at the end of
each cycle so that memory consumption doesn't grow as the for loop
progresses, but it still does.
"2x 8GB DIMM DDR3-1600" cost $95.99 according to a web page.
This might be in the order of magnitude of the hourly rate
attempt at removing everything at the end of
each cycle so that memory consumption doesn't grow as the for loop
progresses, but it still does.
import os
for f in os.listdir( '.' ):
    x = [ ]
    for ( i, line ) in enumerate( open( f ) ):
        import mystuff
        x.append( mystuff.expensive_stuff( line ) )
>> involved and each operation takes about 5-10 MB of memory. As you'll
>> see I tried to make every attempt at removing everything at the end of
>> each cycle so that memory consumption doesn't grow as the for loop
>> progresses, but it still does.
>>
>>
As you'll
> see I tried to make every attempt at removing everything at the end of
> each cycle so that memory consumption doesn't grow as the for loop
> progresses, but it still does.
>
> import os
>
> for f in os.listdir( '.' ):
>
removing everything at the end of
each cycle so that memory consumption doesn't grow as the for loop
progresses, but it still does.
import os
for f in os.listdir( '.' ):
    x = [ ]
    for ( i, line ) in enumerate( open( f ) ):
        import mystuff
        x.append( mystuff.expensive_stuff( line ) )
each cycle so that memory consumption doesn't grow as the for loop
progresses, but it still does.
import os
for f in os.listdir( '.' ):
    x = [ ]
    for ( i, line ) in enumerate( open( f ) ):
        import mystuff
        x.append( mystuff.expensive_stuff( line ) )
    del mystuff
    del x
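To check whether Python-level objects actually survive each iteration, as opposed to the allocator merely holding on to freed pages, something like this helps. It is a generic sketch, not code from the thread:
```
import gc
import tracemalloc

tracemalloc.start()
x = [bytearray(1024) for _ in range(10000)]  # stand-in for the real work
del x
gc.collect()

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)  # anything still listed here is genuinely alive
```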
In article ,
Wolodja Wentland wrote:
>
>I have a problem with the memory consumption of multiprocessing.Pool()'s
>worker processes. I have a parent process that has to handle big data
>structures and would like to use a pool of processes for computations.
>
>The prob
Hi all,
I have a problem with the memory consumption of multiprocessing.Pool()'s
worker processes. I have a parent process that has to handle big data
structures and would like to use a pool of processes for computations.
The problem is that all worker processes have the same memory
requirements as the parent.
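On fork-based platforms the usual workaround is to create the big structure before the Pool exists, so the workers inherit its pages copy-on-write instead of each receiving a pickled copy per task. A sketch under that assumption:
```
import multiprocessing as mp

# built before the fork: workers share these pages copy-on-write
BIG = [list(range(1000)) for _ in range(1000)]

def work(i):
    # note: even reads touch refcounts, which slowly unshares pages
    return len(BIG[i])

if __name__ == "__main__":
    with mp.Pool(4) as pool:
        print(sum(pool.map(work, range(1000))))
```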
Carl Banks writes:
> On Apr 9, 11:23 pm, Hrvoje Niksic wrote:
>> [email protected] (Aahz) writes:
>> > BTW, note that if you're using Python 2.x, range(100) will cause
>> > a "leak" because ints are never freed. Instead, use xrange().
>>
> Note that using xrange() won't help with that particular problem.
On Apr 9, 11:23 pm, Hrvoje Niksic wrote:
> [email protected] (Aahz) writes:
> > BTW, note that if you're using Python 2.x, range(100) will cause
> > a "leak" because ints are never freed. Instead, use xrange().
>
> Note that using xrange() won't help with that particular problem.
I think
[email protected] (Aahz) writes:
> BTW, note that if you're using Python 2.x, range(100) will cause
> a "leak" because ints are never freed. Instead, use xrange().
Note that using xrange() won't help with that particular problem.
In article <[email protected]>,
k3xji wrote:
>
>When I run the following function, I seem to have a mem leak, a 20 mb
>of memory
>is allocated and is not freed. Here is the code I run:
>
>>>> import esauth
>>>> for i in range(100):
>...     ss = esauth.penc('sumer')
On Apr 7, 2:10 pm, John Machin wrote:
> On Apr 7, 9:19 pm, MRAB wrote:
>
>
>
> > k3xji wrote:
> > > Interestingly, I changed malloc()/free() usage with PyMem_xx APIs and
> > > the problem resolved. However, I really cannot understand why the
> > > first version does not work. Here is the latest cod
Carl Banks writes:
> However, Python apparently does leak a reference if passed a Unicode
> object; PyArg_ParseTuple automatically creates an encoded string but
> never decrefs it. (That might be necessary evil to preserve
> compatibility, though. PyString_AS_STRING does it too.)
Uni
On Apr 7, 9:19 pm, MRAB wrote:
> k3xji wrote:
> > Interestingly, I changed malloc()/free() usage with PyMem_xx APIs and
> > the problem resolved. However, I really cannot understand why the
> > first version does not work. Here is the latest code that has no
> > problems at all:
>
> > static PyObjec
k3xji wrote:
Interestingly, I changed malloc()/free() usage with PyMem_xx APIs and
the problem resolved. However, I really cannot understand why the
first version does not work. Here is the latest code that has no
problems at all:
static PyObject *
penc(PyObject *self, PyObject *args)
{
Py
Interestingly, I changed malloc()/free() usage with PyMem_xx APIs and
the problem resolved. However, I really cannot understand why the
first version does not work. Here is the latest code that has no
problems at all:
static PyObject *
penc(PyObject *self, PyObject *args)
{
PyObject * result
On Apr 7, 12:01 am, k3xji wrote:
> When I run the following function, I seem to have a mem leak, a 20 mb
> of memory
> is allocated and is not freed. Here is the code I run:
>
> >>> import esauth
> >>> for i in range(100):
>
> ... ss = esauth.penc('sumer')
> ...
>
> >>> for i in range(1000
When I run the following function, I seem to have a mem leak, a 20 mb
of memory
is allocated and is not freed. Here is the code I run:
>>> import esauth
>>> for i in range(100):
...     ss = esauth.penc('sumer')
...
>>> for i in range(100):
...     ss = esauth.penc('sumer')
...
And here
can't fit in the allocated
> memory, a number of old messages would be discarded.
>
> As the server needs to have room for other tasks, I'd like to limit
> the overall memory consumption to a certain amount.
Since your data is all in one place, why not write a dict or list wrapper
which
ant that all received mail would be kept in RAM and not cached
> out to disk. If a new message comes in that can't fit in the allocated
> memory, a number of old messages would be discarded.
>
> As the server needs to have room for other tasks, I'd like to limit
> the ov
If a new message comes in that can't fit in the allocated
memory, a number of old messages would be discarded.
As the server needs to have room for other tasks, I'd like to limit
the overall memory consumption to a certain amount.
Is this possible? How would I go about implementing it? By imposing
"u
From pytrix:
http://www.american.edu/econ/pytrix/pytrix.py
def permutationsg(lst):
    '''Return generator of all permutations of a list.
    '''
    if len(lst) > 1:
        for i in range(len(lst)):
            for x in permutationsg(lst[:i] + lst[i+1:]):
                yield [lst[i]] + x
    else:
        yield lst
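A quick sanity check of the generator (my example, not from pytrix):
```
>>> list(permutationsg([1, 2, 3]))
[[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
```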
On Tue, Dec 19, 2006 at 03:14:51PM +0100, Christian Meesters wrote:
> Hi,
>
> I'd like to hack a function which returns all possible permutations as lists
> (or tuples) of two from a given list. So far, I came up with this solution,
> but it turned out to be too slow for the given problem, because
"Christian Meesters" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Hi,
>
> I'd like to hack a function which returns all possible permutations as
> lists
> (or tuples) of two from a given list. So far, I came up with this
> solution,
> but it turned out to be too slow for the given problem.
Gerard Flanagan wrote:
> No claims with respect to speed, but the kslice function here:
>
> http://gflanagan.net/site/python/utils/sequtils/
>
> will give the 'k-subsets' which then need to be permuted -
> alternatively Google.
Maybe the function below could then do these permutations.
Ant
Thanks Simon & Gerard!
I will check those exampels out.
Christian
PS Of course, I did google - but apparently not creative enough.
Christian Meesters wrote:
> Hi,
>
> I'd like to hack a function which returns all possible permutations as lists
> (or tuples) of two from a given list. So far, I came up with this solution,
> but it turned out to be too slow for the given problem, because the list
> passed ("atomlist") can be so
On 12/19/06, Christian Meesters <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'd like to hack a function which returns all possible permutations as lists
> (or tuples) of two from a given list. So far, I came up with this solution,
> but it turned out to be too slow for the given problem, because the list
Hi,
I'd like to hack a function which returns all possible permutations as lists
(or tuples) of two from a given list. So far, I came up with this solution,
but it turned out to be too slow for the given problem, because the list
passed ("atomlist") can be some 1e5 items long:
def permute(at
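For reference, modern Python ships this in the standard library: itertools.permutations(lst, 2) yields the ordered pairs lazily, so the roughly n*n results are never materialized at once (a present-day note, not part of the 2006 thread):
```
from itertools import permutations

atomlist = ["C", "N", "O", "H"]         # stand-in for the real 1e5-item list
for a, b in permutations(atomlist, 2):  # all ordered pairs, generated lazily
    pass                                # process the pair here
```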
Thank you guys for your replies.
I've just realized that there was no memory leak and it was just my
mistake to think so. I was almost disappointed with my favorite
programming language before addressing the problem. Actually the app
consumes as much memory as it should and I had just miscalculated
Bryan Olson wrote:
> In Python 2.5, each thread will be allocated
>
> thread.stack_size()
>
> bytes of stack address space. Note that address space is
> not physical memory, nor even virtual memory. On modern
> operating systems, the memory gets allocated as needed,
> and 150 threads is not
Gabriel Genellina wrote:
> At Sunday 22/10/2006 20:31, Roman Petrichev wrote:
>
>> I've just run into a very nasty memory consumption problem.
>> I have a multithreaded app with 150 threads which all use the same
>> function - through urllib2 it just gets the web page's html code
Dennis Lee Bieber wrote:
> How much stack space gets allocated for 150 threads?
In Python 2.5, each thread will be allocated
thread.stack_size()
bytes of stack address space. Note that address space is
not physical memory, nor even virtual memory. On modern
operating systems, the memory gets allocated as needed.
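In modern Python the same knob is exposed as threading.stack_size(); the 256 KiB figure below is only an example value:
```
import threading

# must be called before the threads are created; returns the old size
threading.stack_size(256 * 1024)

t = threading.Thread(target=lambda: None)
t.start()
t.join()
```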
Roman Petrichev wrote:
> Hi folks.
> I've just run into a very nasty memory consumption problem.
> I have a multithreaded app with 150 threads
[...]
>
> The test app code:
>
>
> Q = Queue.Queue()
> for i in rez: #rez length - 5000
>     Q.put(i)
>
Roman Petrichev:
> Dennis Lee Bieber wrote:
>> How much stack space gets allocated for 150 threads?
> Actually I don't know. How can I get to know this?
On Linux, each thread will often be allocated 10 megabytes of stack.
This can be viewed and altered with the ulimit command.
Neil
At Sunday 22/10/2006 20:31, Roman Petrichev wrote:
I've just run into a very nasty memory consumption problem.
I have a multithreaded app with 150 threads which all use the same
function - through urllib2 it just gets the web page's html code
and assigns it to a local variable.
Roman Petrichev wrote:
> try:
>     url = Q.get()
> except Queue.Empty:
>     break
This code will never raise the Queue.Empty exception. Only a
non-blocking get does:
url = Q.get(block=False)
As mentioned before, you should post working code if you expect people to help.
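The non-blocking get in context, as a sketch of the worker loop under discussion (the work itself is a placeholder):
```
import queue      # the Queue module in the thread's Python 2
import threading

Q = queue.Queue()

def worker():
    while True:
        try:
            url = Q.get(block=False)   # raises queue.Empty immediately
        except queue.Empty:
            break                      # queue drained; let the thread exit
        print("would fetch", url)      # placeholder for the real work

for u in ["http://example.com/a", "http://example.com/b"]:
    Q.put(u)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```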
Dennis Lee Bieber wrote:
> On Mon, 23 Oct 2006 03:31:28 +0400, Roman Petrichev <[EMAIL PROTECTED]>
> declaimed the following in comp.lang.python:
>
>> Hi folks.
>> I've just run into a very nasty memory consumption problem.
>> I have a multithreaded app with
Hi folks.
I've just run into a very nasty memory consumption problem.
I have a multithreaded app with 150 threads which all use the same
function - through urllib2 it just gets the web page's html code
and assigns it to a local variable. On the next turn the variable is
overwritten.
In message <[EMAIL PROTECTED]>, Bo Peng <[EMAIL PROTECTED]> writes
>The problem is not that difficult to find, but it was 2 am and
>I was misled by the different behavior of pyFun1 and pyFun2.
Don't know if you were using Windows, but if you were then Python Memory
Validator would ha
Bo Peng wrote:
>> Sorry, are you saying that the code you posted does NOT have a memory
>> leak, but you want us to find the memory leak in your real code sight
>> unseen?
Problem found. It is hidden in a utility function that converts the
return value to a double. The refcnt of the middle res
Steven D'Aprano wrote:
> Sorry, are you saying that the code you posted does NOT have a memory
> leak, but you want us to find the memory leak in your real code sight
> unseen?
valgrind does not detect anything, so it does not look like a memory leak.
I just cannot figure out why val[0], readonl