On Fri, Oct 25, 2013 at 1:10 PM, Jasdeep Hundal wrote:

Does anyone have a good explanation, or pointers to docs, for understanding
how Cassandra decides to remove SSTables from disk?

After performing a large set of deletes on our cluster, a few hundred
gigabytes' worth (essentially cleaning out nearly all old data), we noticed
that nodetool reported about the same load.
Decommission moves the data from the node being decommissioned to the other
nodes that will now have ownership of that data.
Removenode streams the data the removed node was responsible for from the
other replicas, and AFAIK is generally used when a node is offline and cannot
be brought back up.
>
> This could be IO not keeping up; it's unlikely to be a switch lock
> issue if you only have 4 CFs. Also, have you checked for GC messages in
> the C* logs?
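(One quick way to check for those GC messages is to grep the Cassandra system log for GCInspector, the class that logs long GC pauses. The log path below is a common default and is an assumption; adjust it for your install.)

```shell
# Log path is an assumption; adjust to wherever system.log is written.
grep -i 'GCInspector' /var/log/cassandra/system.log
```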
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 16/03/2013, at 12:21 PM, Jasdeep Hundal wrote:
>
I've got a couple of questions related to issues I'm encountering using
Cassandra under a heavy write load:
1. With a ConsistencyLevel of quorum, does
FBUtilities.waitForFutures() wait for read repair to complete before
returning?
2. When read repair applies a mutation, it needs to obtain a lock for