I remember that happening to me once. The SSTables were way beyond the
limit (32 by default) but compaction was still not starting. All I did was
run "nodetool enableautocompaction keyspace table" and compaction started
immediately; the SSTable count came back down to a normal level. It was a
little surprising.
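The check described above can be sketched as a tiny helper. This is a hypothetical illustration, not a real nodetool feature: the table names, counts, and the function itself are made up; only the default threshold of 32 comes from the message above.

```python
# Hypothetical helper: given live SSTable counts per table (e.g. scraped
# from `nodetool cfstats` output), flag the tables sitting above the
# default max_threshold of 32 mentioned above.
def tables_over_threshold(sstable_counts, max_threshold=32):
    """sstable_counts: {table_name: live_sstable_count} -> sorted names."""
    return sorted(name for name, count in sstable_counts.items()
                  if count > max_threshold)

print(tables_over_threshold({"users": 47, "events": 12, "sessions": 33}))
# ['sessions', 'users']
```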
Thanks Rob.
Anyway, ideally a new node joining with ~50GB as its share of the data
should be done in a couple of minutes, or an hour tops, right?
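A back-of-envelope estimate suggests "a couple of minutes" is optimistic even in the best case. The sketch below assumes the Cassandra 2.x default stream_throughput_outbound_megabits_per_sec = 200 (an assumption; check cassandra.yaml) and ignores compaction and index-build time on the joining node.

```python
# Best-case time to stream ~50 GB to a joining node at the assumed
# default stream throughput of 200 Mbit/s (25 MB/s). Real joins add
# compaction and secondary-index builds on top of this.
data_mb = 50 * 1024            # ~50 GB share for the new node
throughput_mb_per_s = 200 / 8  # 200 Mbit/s == 25 MB/s
minutes = data_mb / throughput_mb_per_s / 60
print(f"best-case streaming time: ~{minutes:.0f} minutes")
# best-case streaming time: ~34 minutes
```

So roughly half an hour of pure streaming under these assumptions; an hour or more in practice is not alarming.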
On Fri, Mar 20, 2015 at 6:07 PM, Robert Coli wrote:
> On Fri, Mar 20, 2015 at 4:08 PM, Pranay Agarwal
> wrote:
>
>> Also, the ver
at 3:57 PM, Pranay Agarwal
> wrote:
>
>> I guess now I have to decide whether it's better to upgrade to 2.1.6+ or
>> to downgrade to a stable release, and the safe way to do that.
>>
>
> You can't downgrade across major versions, you'd have to read out
> everything from
Thanks a lot Rob.
I guess now I have to decide whether it's better to upgrade to 2.1.6+ or to
downgrade to a stable release, and the safe way to do that.
On Fri, Mar 20, 2015 at 3:35 PM, Robert Coli wrote:
> On Thu, Mar 19, 2015 at 6:02 PM, Pranay Agarwal
> wrote:
>
>> What do you mean
No. As shown in the histograms, 99% of reads touch 2 or fewer SSTables.
What's typical, usually? Can anyone share from experience?
On Fri, Mar 20, 2015 at 1:12 PM, Duncan Sands
wrote:
> On 20/03/15 19:34, Pranay Agarwal wrote:
>
>> The cluster is processing somethi
Also, typically how long does it take for a node to join? I have 1 TB of
data in total across a 15-node Cassandra cluster.
On Fri, Mar 20, 2015 at 10:53 AM, Pranay Agarwal
wrote:
> Thanks Rahul, you are right. Unless the node completely joins the ring,
> there is no data dependency on it.
Hi All.
I am using a 15-node Cassandra cluster (m3.2xlarge) with provisioned-IOPS
disks (4000). I can see around 12k reads/sec of operations on the Cassandra
cluster.
But I see around *~3500 read IOPS* on each of the Cassandra nodes. Is that
normal?
I am using LeveledCompaction and I can see in the histograms
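As a rough sanity check on these numbers, here is the arithmetic, assuming RF = 3 and an even read distribution across the 15 nodes (neither assumption is stated above):

```python
# Relating cluster-level client reads to per-node disk IOPS,
# under the assumed RF = 3 and an even spread across 15 nodes.
cluster_reads_per_s = 12_000
nodes, rf = 15, 3
replica_reads_per_node = cluster_reads_per_s * rf / nodes   # 2400/s
disk_iops_per_node = 3_500
disk_reads_per_replica_read = disk_iops_per_node / replica_reads_per_node
print(f"~{disk_reads_per_replica_read:.1f} disk reads per replica read")
# ~1.5 disk reads per replica read
```

Under these assumptions, ~1.5 disk reads per replica read lines up with the histogram observation that 99% of reads touch 2 or fewer SSTables, so the per-node IOPS are not obviously anomalous.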
Also, the new nodes (3 of them, in *UJ state*) are showing some data size
(~10g). Is there any chance of data loss if I stop Cassandra on them?
On Thu, Mar 19, 2015 at 6:02 PM, Pranay Agarwal
wrote:
Thanks Rob, you are right. I am using ReleaseVersion: 2.1.0.
What do you mean by point 3? Also, by "doing one at a time", do you mean
waiting until the new node's nodetool status goes from UJ to UN?
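The UJ/UN check above can be automated by parsing `nodetool status` output, whose first column is the node state (UN = Up/Normal, UJ = Up/Joining). A minimal sketch; the sample output below is made up (addresses, sizes) and abridged, so on a real cluster you would feed in the actual `nodetool status` text:

```python
# "One at a time" means: wait until this returns an empty list
# before starting the next joining node.
SAMPLE_STATUS = """\
UN  10.0.0.1   52.1 GB  256  33.3%  rack1
UJ  10.0.0.15  9.8 GB   256  ?      rack1
"""

def joining_nodes(status_text):
    """Return the addresses of nodes still in the UJ (joining) state."""
    return [line.split()[1] for line in status_text.splitlines()
            if line.startswith("UJ")]

print(joining_nodes(SAMPLE_STATUS))  # ['10.0.0.15']
```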
On Thu, Mar 19, 2015 at 5:44 PM, Robert Coli wrote:
> On Thu, Mar 19, 2015 at 5:32 PM, Pranay Agar
Hi,
I have a 14-node Cassandra cluster; each node has around 50GB of data. I
added 3 new nodes to the cluster and I can see the status as *UJ* for the
new nodes. They have been in that state for almost a day now and their data
size seems to be unchanged as well. There is almost no CPU or disk usage on
them either.
Hi All,
I used sstableloader to export data from the first Cassandra cluster (RF 3)
to another cluster with RF 1. After all the tables were copied and the
second cluster was working fine, I decided to run node repair on the second
cluster as a regular operation. *This repair caused the data size on the secon
without having to back up the token ring ranges but just the data backup?
-Pranay
On Mon, Dec 29, 2014 at 1:58 PM, Robert Coli wrote:
> On Mon, Dec 29, 2014 at 1:40 PM, Pranay Agarwal
> wrote:
>
>> I want to understand what is the best way to increase/change the replica
>> facto
ng copies of data around and not just
> doing normal Merkle tree work.
>
> Restoring from backup to a new cluster (including how to handle token
> ranges) is discussed in detail here
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_snapshot_restore_new_c
Hi All,
I have a 20-node Cassandra cluster with 500GB of data and a replication
factor of 1. I increased the replication factor to 3 and ran nodetool repair
on each node, one by one, as the docs say. But it takes hours for one node
to finish repair. Is that normal, or am I doing something wrong?
Also,
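The slowness has a simple explanation, which the quoted reply above also hints at: after raising RF from 1 to 3, repair is not just comparing Merkle trees, it has to stream two full extra copies of every range. A back-of-envelope sketch, assuming even data distribution and the Cassandra 2.x default 200 Mbit/s stream throughput (both assumptions):

```python
# Why repair after RF 1 -> 3 takes hours: each node must receive full
# copies of the ranges it now replicates, on top of Merkle validation
# reads over all existing data.
nodes, total_gb, rf_old, rf_new = 20, 500, 1, 3
per_node_before = total_gb / nodes                      # 25 GB
to_receive_gb = per_node_before * (rf_new - rf_old)     # 50 GB per node
stream_minutes = to_receive_gb * 1024 / (200 / 8) / 60  # at 25 MB/s
print(f"each node receives ~{to_receive_gb:.0f} GB: "
      f"~{stream_minutes:.0f} min of pure streaming, before validation")
# each node receives ~50 GB: ~34 min of pure streaming, before validation
```

Add Merkle-tree computation (which reads every SSTable) and compaction of the streamed data, and hours per node is within the normal range under these assumptions.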