On 04/11/2011 03:05, Anssi Kääriäinen wrote:
On Nov 4, 3:38 am, Marco Paolini wrote:
> > Postgresql:
> > .chunked(): 26716.0kB
> > .iterator(): 46652.0kB
>
> what if you use .chunked().iterator() ?

Quick test shows that the actual memory used by the queryset is around
1.2Mb. Using smaller fetch size than the default 2000 would result in
less ...
On Nov 4, 3:29 am, Marco Paolini wrote:
> where/when do we close() cursors, or do we rely on the cursor __del__()
> implementation?

I guess we should rely on it going away when it happens to go away
(that is, the __del__ way).

> postgres named cursors can't be used in autocommit mode [1]

I don't know if ...
On Nov 4, 3:38 am, Marco Paolini wrote:
> what if you use .chunked().iterator() ?

You can't. .chunked() returns a generator. Note that the memory usage
is the total memory usage for the process, not for the query. The
memory usage for the query is probably just a small part of the total.
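To make the "you can't" concrete: here is a minimal, hypothetical sketch of a chunked helper over plain Python iterables (not Django's API). Because it is a generator function, the object it returns is a plain generator with no `.iterator()` method to chain onto:

```python
import itertools

def chunked(iterable, chunk_size=2000):
    """Yield items one at a time, but pull them from the underlying
    iterable in chunks of `chunk_size` (illustration only)."""
    iterator = iter(iterable)
    while True:
        # Materialize one chunk; peak memory is bounded by chunk_size items.
        chunk = list(itertools.islice(iterator, chunk_size))
        if not chunk:
            return
        for item in chunk:
            yield item

gen = chunked(range(10), chunk_size=3)
print(hasattr(gen, 'iterator'))  # False: it is a generator, nothing to chain
print(list(gen))                 # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```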
On 04/11/2011 01:50, Anssi Kääriäinen wrote:
On Nov 4, 1:20 am, Marco Paolini wrote:
> time to write some patches, now!

Here is a proof of concept for one way to achieve chunked reads when
using psycopg2. This lacks tests and documentation. I think the
approach is sane, though. It allows different database backends to be
able to decide how ...
...
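The fetchmany-based pattern such a proof of concept would rely on can be sketched with the stdlib sqlite3 module standing in for psycopg2 (with psycopg2 you would additionally open a named, i.e. server-side, cursor so that chunks are fetched lazily from the server rather than buffered client-side). The table and column names here are made up for the example:

```python
import sqlite3

def chunked_rows(cursor, sql, params=(), chunk_size=2000):
    """Execute `sql` and yield rows via fetchmany(), so at most
    `chunk_size` rows are held in Python at any one time."""
    cursor.execute(sql, params)
    while True:
        rows = cursor.fetchmany(chunk_size)
        if not rows:
            break
        for row in rows:
            yield row

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE item (id INTEGER PRIMARY KEY)')
conn.executemany('INSERT INTO item (id) VALUES (?)',
                 [(i,) for i in range(10)])
total = sum(row[0] for row in chunked_rows(conn.cursor(),
                                           'SELECT id FROM item',
                                           chunk_size=4))
print(total)  # 45
```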
The SQLite3 shared cache mode seems to suffer from the same problem
as MySQL:
"""
At any one time, a single table may have any number of active
read-locks or a single active write-lock. To read data from a table,
a connection must first obtain a read-lock. To write to a table, a
connection must ...
"""
On Nov 3, 11:09 pm, Marco Paolini wrote:
> > Now, calling the .iterator() directly is not safe on SQLite3. If you
> > do updates to objects not seen by the iterator yet, you will see those
> > changes. On MySQL, all the results are fetched into Python in one go,
> > and the only saving is from not populating the _result_cache. I guess
> > Oracle will just ...
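The distinction can be demonstrated with plain sqlite3: materializing all rows up front (which is effectively what populating the result cache does) gives you a stable snapshot, while reading the table again after an update sees the changes. This is a simplified illustration, not Django code:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)')
conn.executemany('INSERT INTO t VALUES (?, ?)', [(i, 0) for i in range(5)])

# Fetch everything into Python in one go (a stable snapshot).
snapshot = conn.execute('SELECT val FROM t').fetchall()

# Update the rows after the snapshot was taken.
conn.execute('UPDATE t SET val = 1')

# The snapshot is unaffected; a fresh read sees the update.
print([v for (v,) in snapshot])                           # [0, 0, 0, 0, 0]
print([v for (v,) in conn.execute('SELECT val FROM t')])  # [1, 1, 1, 1, 1]
```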
On Nov 3, 1:06 am, I wrote:
> I did a little testing. It seems you can get the behavior you want if you
> just do this in PostgreSQL:
>
>     for obj in Model.objects.all().iterator():  # Note the extra .iterator()
>         # handle object here.
>
> I would sure like a verification of this test, I am tired ...
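A stripped-down model of the caching behavior in question (not Django's actual implementation) is: default iteration fills a result cache that is kept for the queryset's lifetime, while an iterator()-style method streams rows without retaining them. The class below is a toy stand-in for a QuerySet:

```python
class FakeQuerySet:
    """Toy model: plain iteration caches results; iterator() streams."""
    def __init__(self, rows):
        self._rows = rows            # stands in for the DB result set
        self._result_cache = None    # analogous to QuerySet._result_cache

    def __iter__(self):
        if self._result_cache is None:
            # Fetch everything and keep it for the queryset's lifetime.
            self._result_cache = list(self._rows)
        return iter(self._result_cache)

    def iterator(self):
        # Stream the rows; nothing is retained on the queryset.
        for row in self._rows:
            yield row

qs = FakeQuerySet(range(3))
list(qs.iterator())
print(qs._result_cache)  # None: iterator() did not populate the cache
list(qs)
print(qs._result_cache)  # [0, 1, 2]: plain iteration cached the rows
```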
On Thu, Nov 3, 2011 at 2:14 AM, Javier Guerra Giraldez wrote:
> this seems to be the case with MyISAM tables; on the InnoDB engine
> docs, it says that SELECT statements don't set any lock, since it
> reads from a snapshot of the table.
>
> on MyISAM, there are (clumsy) workarounds by forcing the ...