Re: Understanding of proliferation of sstables during a repair

2017-02-26 Thread Seth Edwards
" > during merkle tree calculation. > > 2017-02-26 20:41 GMT+01:00 Seth Edwards : > >> Hello, >> >> We just ran a repair on a keyspace using TWCS and a mixture of TTLs .This >> caused a large proliferation of sstables and compactions. There is likely a >

Understanding of proliferation of sstables during a repair

2017-02-26 Thread Seth Edwards
Hello, We just ran a repair on a keyspace using TWCS and a mixture of TTLs. This caused a large proliferation of sstables and compactions. There is likely a lot of entropy in this keyspace. I am trying to better understand why this is. I've also read that you may not want to run repairs on short
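Repair streams missing data in from the other replicas, and the streamed rows land in brand-new sstables — often many small ones, possibly spanning several TWCS time windows — which the node then has to compact back down. That is one common source of a post-repair sstable explosion. For reference, a minimal sketch of the kind of schema in play (keyspace, table, window settings, and TTL are hypothetical, not taken from the thread):

```python
# Minimal sketch of a TWCS table with a default TTL, via the DataStax
# Python driver. All names and option values are illustrative only.
from cassandra.cluster import Cluster  # pip install cassandra-driver

session = Cluster(["127.0.0.1"]).connect("my_ks")
session.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id      uuid,
        ts      timestamp,
        payload text,
        PRIMARY KEY (id, ts)
    ) WITH compaction = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_unit': 'DAYS',
        'compaction_window_size': 1
    } AND default_time_to_live = 604800
""")
```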

Re: Question about compaction strategy changes

2016-10-24 Thread Seth Edwards
may want to consider > dropping concurrent compactors down so fewer compaction tasks run at the > same time. That will translate proportionally to the amount of extra disk > you have consumed by compaction in a TWCS setting. > > *From: *Seth Edwards
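The knob being discussed is concurrent_compactors in cassandra.yaml. A hedged sketch of lowering it (the file path and value are assumptions, PyYAML's safe_dump discards comments, so hand-editing is the usual route, and the node must be restarted to pick up the change):

```python
# Sketch: lower concurrent_compactors so fewer compaction tasks run at
# once, bounding the temporary disk consumed by in-flight compactions.
import yaml  # pip install pyyaml

PATH = "/etc/cassandra/cassandra.yaml"  # assumed location

with open(PATH) as f:
    conf = yaml.safe_load(f)

conf["concurrent_compactors"] = 2  # default is derived from cores/disks

with open(PATH, "w") as f:
    yaml.safe_dump(conf, f, default_flow_style=False)
```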

Re: Question about compaction strategy changes

2016-10-23 Thread Seth Edwards
jump into the thousands and we end up being short of a few hundred GB of disk space. On Sun, Oct 23, 2016 at 5:49 PM, kurt Greaves wrote: > > On 22 October 2016 at 03:37, Seth Edwards wrote: > >> We're using TWCS and we notice that if we make changes to the options to

Question about compaction strategy changes

2016-10-21 Thread Seth Edwards
Hello! We're using TWCS and we notice that if we make changes to the options to the window unit or size, it seems to implicitly start recompacting all sstables. Is this indeed the case and, more importantly, does the same happen if we were to adjust the gc_grace_seconds for this table? Thanks!
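Both changes go through ALTER TABLE. The thread's observation is that changing the TWCS window options kicks off recompaction of existing sstables; changing gc_grace_seconds, by contrast, is a table-metadata change that should not itself trigger recompaction — it only affects when tombstones become purgeable at future compactions. A sketch via the DataStax Python driver (keyspace, table, and option values are hypothetical):

```python
# Sketch of the two ALTERs being discussed; names and values illustrative.
from cassandra.cluster import Cluster  # pip install cassandra-driver

session = Cluster(["127.0.0.1"]).connect("my_ks")

# Changing the TWCS window: per the thread, expect existing sstables
# to be regrouped/recompacted into the new windows.
session.execute("""
    ALTER TABLE events WITH compaction = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_unit': 'HOURS',
        'compaction_window_size': 6
    }
""")

# Changing gc_grace_seconds: takes effect at future compactions; no
# immediate recompaction expected.
session.execute("ALTER TABLE events WITH gc_grace_seconds = 86400")
```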

Re: Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
> Also to maintain your read throughput during this whole thing, double > check the EBS volume's read_ahead_kb setting on the block volume and reduce > it to something sane like 0 or 16. > > On Mon, 17 Oct 2016 at 13:42 Seth Edwards wrote: > >> @Ben
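read_ahead_kb is exposed per block device in sysfs. A small sketch of the suggested change (the device name is an assumption — check lsblk — and writing the value requires root):

```python
# Sketch: inspect and lower read-ahead for one block device via sysfs.
from pathlib import Path

dev = "xvdf"  # hypothetical EBS device name
ra = Path(f"/sys/block/{dev}/queue/read_ahead_kb")

print("current read_ahead_kb:", ra.read_text().strip())
ra.write_text("16")  # 0 or 16, per the suggestion above
```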

Re: Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
Vladimir Yudovin, >> >> *Winguzone >> <https://winguzone.com?from=list> - Hosted Cloud Cassandra on Azure and >> SoftLayer. Launch your cluster in minutes.*

Re: Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
> On Monday, October 17, 2016, Seth Edwards wrote: > >> We're running 2.0.16. We're migrating to a new data model but we've had >> an unexpected increase in write traffic that has caused us some capacity >> issues when we encounter compactions. Our old dat

Re: Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
use new disk to distribute both >> new and existing data. >> >> Best regards, Vladimir Yudovin, >> >> *Winguzone <https://winguzone.com?from=list> - Hosted Cloud Cassandra on >> Azure and SoftLayer. Launch your cluster in minutes.*

Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
We have a few nodes that are running out of disk capacity at the moment and, instead of adding more nodes to the cluster, we would like to add another disk to the server and add it to the list of data directories. My question is: will Cassandra use the new disk for compactions on sstables that alre
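Extra volumes are listed under data_file_directories in cassandra.yaml, and in this era of Cassandra new sstables (including compaction output) tend to land on whichever configured directory has the most free space; existing sstables are not rebalanced until they are next compacted. A hedged sketch of the config change (paths are assumptions; hand-editing is the usual route, and the node must be restarted):

```python
# Sketch: append a second data directory to cassandra.yaml. Note that
# yaml.safe_dump discards comments from the original file.
import yaml  # pip install pyyaml

PATH = "/etc/cassandra/cassandra.yaml"  # assumed location

with open(PATH) as f:
    conf = yaml.safe_load(f)

dirs = conf.get("data_file_directories") or ["/var/lib/cassandra/data"]
dirs.append("/mnt/disk2/cassandra/data")  # hypothetical new mount point
conf["data_file_directories"] = dirs

with open(PATH, "w") as f:
    yaml.safe_dump(conf, f, default_flow_style=False)
```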

Re: Question about adding nodes to a cluster

2015-02-09 Thread Seth Edwards
I see what you are saying. So basically take whatever existing token I have and divide it by 2, give or take a couple of tokens? On Mon, Feb 9, 2015 at 5:17 PM, Robert Coli wrote: > On Mon, Feb 9, 2015 at 4:59 PM, Seth Edwards wrote: > >> We are choosing to double our cluster
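Concretely, bisecting each existing range is the same as generating twelve evenly spaced tokens and handing the six new ones to the new nodes. A small sketch, assuming six evenly spaced existing tokens and RandomPartitioner's 0..2**127-1 token space (Murmur3Partitioner uses a different range):

```python
# Sketch: double a 6-node ring by giving each new node the midpoint of
# an existing token range. Assumes evenly spaced RandomPartitioner tokens.
RING = 2**127
old = [i * RING // 6 for i in range(6)]

new = []
for i, t in enumerate(old):
    nxt = old[(i + 1) % len(old)]
    if nxt <= t:          # the last range wraps around the ring
        nxt += RING
    new.append(((t + nxt) // 2) % RING)

for tok in sorted(old + new):
    print(tok)  # each value becomes one node's initial_token
```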

Question about adding nodes to a cluster

2015-02-09 Thread Seth Edwards
I am on Cassandra 1.2.19 and I am following the documentation for adding existing nodes to a cluster. We are choosing to double our cluster from six to twelve. I ran the token generator. Based on what I r