> but it should be faster because much less data is transferred (and the
> latency can be hidden by concurrency).
>
> Martin
>
> --
> *From:* Michal Augustýn [mailto:augustyn.mic...@gmail.com]
> *Sent:* Monday, September 06, 2010 10:26 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: skip + limit support in GetSlice
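A minimal sketch of the latency-hiding point in the quote above: several independent slice requests issued concurrently so their network round trips overlap. fetch_slice here is a hypothetical stand-in for one round trip, not part of any Cassandra client API.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def fetch_slice(start_column):
        time.sleep(0.05)          # stands in for one network round trip
        return [start_column]     # stands in for the returned column names

    starts = ["a", "b", "c", "d"]
    with ThreadPoolExecutor(max_workers=len(starts)) as pool:
        # wall time is roughly one round trip instead of four
        results = list(pool.map(fetch_slice, starts))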
Hi Mike,
yes, I read the PDF start to finish. Twice. As I wrote, my application is not
accessed by users; it is accessed by other applications, which may request
pages randomly.
So when some application wants page 51235 (so skip is 5123500 and limit
is 100), then I have to:
1) GetSlice(from: "", to
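A minimal sketch of emulating skip + limit on top of GetSlice, assuming a hypothetical get_slice(row_key, start, count) wrapper over the Thrift call that returns up to count (column_name, value) pairs ordered by name, starting at start inclusive, with "" meaning the first column:

    BATCH = 1000

    def skip_limit(row_key, skip, limit, get_slice):
        """Walk the row in batches, discarding `skip` columns, then
        return the next `limit` columns."""
        start = ""                      # "" = begin at the first column
        skipped = 0
        while skipped < skip:
            want = min(BATCH, skip - skipped)
            # ask for one extra column, because a non-empty `start`
            # is returned again (the slice range is inclusive)
            cols = get_slice(row_key, start, want + (1 if start else 0))
            if start:
                cols = cols[1:]         # drop the repeated start column
            if not cols:
                return []               # row has fewer than `skip` columns
            skipped += len(cols)
            start = cols[-1][0]         # last column name seen so far
        page = get_slice(row_key, start, limit + (1 if start else 0))
        if start:
            page = page[1:]
        return page[:limit]

Every column name before the requested page still crosses the wire, which is exactly the inefficiency this thread is about.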
Hi Michal,
Did you read the PDF Stu sent over, start to finish? There are several
different approaches described there.
With Cassandra, what we found works best for pagination:
* Keep a separate 'total_records' count and increment/decrement it on
every insert/delete
* When getting slices, pa
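A sketch of the counter approach described above, with an in-memory stand-in for the row; none of this is Cassandra API, it only illustrates the bookkeeping:

    class PagedRow:
        def __init__(self):
            self.columns = {}         # column name -> value
            self.total_records = 0    # maintained on every insert/delete

        def insert(self, name, value):
            if name not in self.columns:
                self.total_records += 1
            self.columns[name] = value

        def delete(self, name):
            if self.columns.pop(name, None) is not None:
                self.total_records -= 1

        def slice_after(self, last_seen, limit):
            # next page: the `limit` smallest column names strictly
            # greater than the last name the client already has
            names = sorted(n for n in self.columns if n > last_seen)
            return [(n, self.columns[n]) for n in names[:limit]]

Keeping total_records up to date makes the page count a single read (ceil(total_records / page_size)) rather than a scan.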
I know that "Prev/Next" is a good solution for web applications. But when I
want to access data from another application, or when I want to access pages
randomly...
I don't know the internal structure of memtables etc., so I don't know
whether columns in a row are indexable. If not, then I just want to tran
Cassandra supports the recommended approach from:
http://www.percona.com/ppc2009/PPC2009_mysql_pagination.pdf
For large numbers of items, skip + limit is extremely inefficient.
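A minimal sketch of the difference, with a sorted Python list standing in for an index ordered by key. On a real server the offset variant must read and discard `skip` entries before the page, which is what makes deep pages slow; the seek method from the slides jumps straight past the last key the client saw:

    import bisect

    def page_by_offset(rows, skip, limit):
        # stands in for OFFSET/LIMIT: the server walks `skip` entries first
        return rows[skip:skip + limit]

    def page_by_seek(rows, last_key, limit):
        # stands in for WHERE key > last_key LIMIT `limit`
        i = bisect.bisect_right(rows, last_key)  # binary search past last_key
        return rows[i:i + limit]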
-----Original Message-----
From: "Michal Augustýn"
Sent: Sunday, September 5, 2010 5:39am
To: user@cassandra.apache.org