Yes, but you still need to run 'nodetool cleanup' from time to time to make
sure all tombstones are deleted.
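For example, assuming a keyspace named 'my_keyspace' (substitute your own):

  nodetool -h localhost cleanup my_keyspace

Keep in mind that cleanup rewrites the SSTables on the node, so it's fairly
I/O-intensive; running it off-peak is a good idea.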
On Fri, May 16, 2014 at 10:11 AM, Dimetrio wrote:
> Does Cassandra delete tombstones during simple LCS compaction, or should
> I use nodetool repair?
>
All you need to do is decrease the replication factor of DC1 to 0 and then
decommission the nodes one by one.
I've tried this before and it worked with no issues.
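Roughly, assuming CQL3 and NetworkTopologyStrategy (the keyspace and DC
names below are placeholders):

  ALTER KEYSPACE my_keyspace
    WITH replication = {'class': 'NetworkTopologyStrategy', 'DC2': 2};

and then, on each DC1 node in turn:

  nodetool decommission

Dropping DC1 from the replication map is effectively RF 0 for that DC; just
make sure your clients are pointed at DC2 coordinators before you start
decommissioning.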
Thanks,
On Tue, Jul 23, 2013 at 10:32 PM, Lanny Ripple wrote:
> Hi,
>
> We have a multi-dc setup using DC1:2, DC2:2. We want to
For Cassandra 1.0, the default read_repair_chance is 1.0 if you use the
CLI or any Thrift client, such as Hector or pycassa, and 0.1 if you use
CQL.
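If that's what is causing the cross-DC reads, it can be lowered per column
family; a minimal cqlsh example (keyspace/table names are placeholders):

  ALTER TABLE my_keyspace.my_cf WITH read_repair_chance = 0.1;

That makes a Thrift-created column family behave like the CQL default, so
only ~10% of reads trigger repair against all replicas (including the
remote DC).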
> > On Sun, Jul 21, 2013 at 10:26 AM, Omar Shibli wrote:
> > One more thing, I'm doing a lot
One more thing: I'm doing a lot of key slice read requests; is that
supposed to change anything?
On Sun, Jul 21, 2013 at 8:21 PM, Omar Shibli wrote:
> I'm seeing a lot of inter-dc read requests, although I've followed
> DataStax guidelines for multi-dc deployment
> http://www.datastax.com/dev/blog/deploying-cassandra-across-multiple-data-centers
I'm seeing a lot of inter-DC read requests, although I've followed the
DataStax guidelines for multi-DC deployment:
http://www.datastax.com/dev/blog/deploying-cassandra-across-multiple-data-centers
Here is my setup:
2 data centers within the same region (AWS)
Targeting DC, RP 3, 6 nodes
Analytic DC, RP
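For what it's worth, the main recommendation in that guide is to keep reads
DC-local via the consistency level; e.g. in cqlsh (keyspace/table names are
placeholders, and older cqlsh builds may not have the CONSISTENCY command):

  CONSISTENCY LOCAL_QUORUM;
  SELECT * FROM my_keyspace.my_cf LIMIT 10;

With LOCAL_QUORUM and NetworkTopologyStrategy the read itself stays in the
coordinator's data center; what can still cross DCs is read repair.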
I have a running cluster (3 nodes) on release version 1.2.0-beta2, and I've
successfully added and removed nodes in this cluster in the past.
I'm trying to add a new node running release 1.2 rc1, but it seems like the
other peers are refusing to connect. These are the exceptions:
INFO
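A quick way to compare releases across the ring (host is a placeholder):

  nodetool -h <host> version

run against each node; 'nodetool gossipinfo' also reports each peer's
RELEASE_VERSION. Mixing 1.2.0-beta2 and 1.2 rc1 in one ring isn't
guaranteed to work, so upgrading the existing nodes to the same release
first is probably the safest path.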