Can it be that you have tons and tons of tombstoned columns in the middle
of these two? I've seen plenty of performance issues with wide
rows littered with column tombstones (you could check with dumping the
sstables...)
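Checking for column tombstones in a dump can be scripted. A minimal sketch, assuming `sstable2json`-style output (the Cassandra 1.x dump tool flags deleted columns with a `"d"` marker in the column tuple; the exact format here is an assumption, and the sample data is made up):

```python
import json

def count_tombstones(rows):
    """Count flagged-deleted columns per row key in sstable2json-style output."""
    counts = {}
    for row in rows:
        # each column is [name, value, timestamp, flag?]; "d" marks a tombstone
        counts[row["key"]] = sum(
            1 for col in row["columns"] if len(col) > 3 and col[3] == "d"
        )
    return counts

# Small made-up sample mimicking the dump format:
sample = json.loads("""
[{"key": "user1", "columns": [["c1", "", 1352370000000, "d"],
                              ["c2", "v", 1352370000001]]}]
""")
print(count_tombstones(sample))  # {'user1': 1}
```

A row whose tombstone count dwarfs its live-column count is a good candidate for the churn problem described above.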
Just a thought...
Josep M.
On Thu, Nov 8, 2012 at 12:23 PM, André Cruz wrote:
We've run into exactly the same problem recently. Some specific keys in a
couple of CFs accumulate a fair amount of column churn over time.
Pre Cassandra 1.x, we scheduled frequent full compactions to purge them.
However, when we moved to 1.x we adopted the recommended practice of
avoiding full compactions
much more complex and inefficient way to handle wide rows, or b)
will force users to use different, less efficient data models for their
data. Both seem like bad propositions to me, as they wouldn't be taking
advantage of Cassandra's power, therefore diminishing its value.
Cheers,
Josep M.
Hi,
I am confused about the right way to specify column slices for
composite-type CFs using CQL3.
I first thought the way to do so was the rather ugly and unintuitive
syntax of constraining the PK prefix with equalities, except for
the last part of the composite type. But now, after seeing
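The "equalities on the PK prefix, inequality on the last part" pattern in question can be sketched against a hypothetical table (table and column names here are made up for illustration):

```sql
-- Hypothetical table with a composite primary key
CREATE TABLE timeline (
    user_id  text,
    day      text,
    event_id int,
    payload  text,
    PRIMARY KEY (user_id, day, event_id)
);

-- Fix every clustering column except the last with equalities,
-- then slice on the last one with inequalities:
SELECT payload FROM timeline
 WHERE user_id = 'K' AND day = '2012-11-08'
   AND event_id >= 100 AND event_id < 200;
```

This is the syntax being described as unintuitive: the slice bounds only apply to the final component, with everything before it pinned by `=`.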
On Wed, Jan 18, 2012 at 12:44 PM, Jonathan Ellis wrote:
> On Wed, Jan 18, 2012 at 12:31 PM, Josep Blanquer
> wrote:
> > If I do a slice without a start (i.e., get me the first column)... it
> > seems to fly. GET("K", :count => 1 )
>
> Y
slow reads. Wouldn't this mostly imply following
some links/pointers in memory to start reading ordered columns? What is the
backing store used for Memtables when column slices are performed?
I am not sure why starting at the end (without reversing or anything)
yields much better performance.
Hi,
I've been doing some tests using wide rows recently, and I've seen some
odd performance problems that I'd like to understand.
In particular, I've seen that the time it takes Cassandra to perform a
column slice on a single key, solely in a Memtable, seems to be
surprisingly high, but most im
Hi,
I am looking for an efficient way to migrate a portion of the data existing in
one Cassandra cluster to another, separate Cassandra cluster. What I need is
to solve the typical live-migration problem that appears in any "DB
sharding" setup, where one needs to transfer "ownership" of certain rows from DB1 to
D
On Thu, Jun 23, 2011 at 8:54 AM, Peter Schuller wrote:
> > Actually, I'm afraid that's not true (unless I'm missing something). Even
> > if you have only 1 drive, you still need to stop writes to the disk for the
> > short time it takes the low level "drivers" to snapshot it (i.e., marking
> >
On Thu, Jun 23, 2011 at 8:02 AM, William Oberman wrote:
> I've been doing EBS snapshots for mysql for some time now, and was using a
> similar pattern as Josep (XFS with freeze, snap, unfreeze), with the extra
> complication that I was actually using 8 EBS's in RAID-0 (and the extra
> extra compli
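The freeze/snap/unfreeze pattern mentioned above can be sketched roughly as follows. This is an ops sketch under assumptions, not a tested procedure: the mount point and volume ID are placeholders, and the snapshot command assumes the classic EC2 API tools of that era.

```
#!/bin/sh
# Sketch: XFS freeze -> EBS snapshot -> unfreeze.
# /mnt/data and vol-12345678 are placeholders.
MOUNT=/mnt/data
VOLUME=vol-12345678

xfs_freeze -f "$MOUNT"              # halt writes so the FS image is consistent
ec2-create-snapshot "$VOLUME" \
    -d "cassandra data $(date +%F)" # EBS snapshot taken while frozen
xfs_freeze -u "$MOUNT"              # resume writes immediately
```

The freeze window only needs to cover the instant the snapshot is *initiated*; EBS copies blocks in the background afterwards, which is why the unfreeze can happen right away.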
On Thu, Jun 23, 2011 at 7:30 AM, Peter Schuller wrote:
> > EBS volume atomicity is good. We've had tons of experience since EBS came
> > out almost 4 years ago, to back all kinds of things, including large DBs.
> > One important thing to have in mind though, is that EBS snapshots are done
> >
On Thu, Jun 23, 2011 at 5:04 AM, Peter Schuller wrote:
> > 1. Is it feasible to run directly against a Cassandra data directory
> > restored from an EBS snapshot? (as opposed to nodetool snapshots restored
> > from an EBS snapshot).
>
> Assuming EBS is not buggy, including honoring write barriers, i
I believe the offset values of Writes and Reads are in *micro*seconds, right?
(that page talks about *milli*seconds)
Also, are any timeouts or errors reflected in those times, or just successful
operations? If not, is there any JMX or other tool to keep track of them?
Josep M.
On Fri, May 6, 2011
dra server
see much of a difference?
Cheers,
Josep M.
On Mon, Apr 11, 2011 at 3:49 PM, aaron morton wrote:
> AFAIK both follow the same path internally.
>
> Aaron
>
> On 12 Apr 2011, at 06:47, Josep Blanquer wrote:
>
>> All,
>>
>> From a thrift client perspect
All,
From a thrift client perspective using Cassandra, there are currently
2 options for deleting keys/columns/subcolumns:
1- One can use the "remove" call, which only takes a column path, so
you can only delete 'one thing' at a time (an entire key, an entire
supercolumn, a column, or a subcolumn)
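The "one thing at a time" granularity of `remove` can be sketched as follows. This assumes Thrift-generated Python bindings and a live cluster, so it is an illustrative sketch only; module names follow the Thrift-era API and the CF/column names are made up:

```
# Sketch: Thrift "remove" takes exactly one ColumnPath per call.
# Requires the Thrift-generated cassandra bindings; not runnable standalone.
from cassandra.ttypes import ColumnPath, ConsistencyLevel

def delete_one_column(client, key, ts):
    # ColumnPath pins a single column within a CF...
    client.remove(key, ColumnPath(column_family='Users', column='email'),
                  ts, ConsistencyLevel.QUORUM)

def delete_whole_row(client, key, ts):
    # ...while omitting column (and super_column) deletes the entire key.
    client.remove(key, ColumnPath(column_family='Users'),
                  ts, ConsistencyLevel.QUORUM)
```

Deleting several arbitrary columns therefore means one `remove` round-trip each, which is the limitation being contrasted here.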