Thank you so much Bowen for your advice on this. Really appreciate it!
Thanks, Jiayong Sun
On Monday, August 16, 2021, 11:56:39 AM PDT, Bowen Song wrote:
Hi Jiayong,
You will need to reduce the num_tokens on all existing nodes in the cluster in
order to "fix" the re
. Actually, we used to add a ring with num_tokens: 4 in this cluster but later removed it due to some other issue. We have started using num_tokens: 16 as the standard for any new clusters.
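As a minimal sketch of what that standard would look like (the cassandra.yaml path below is an assumption, not a detail from this thread), keeping in mind that num_tokens only takes effect when a node first bootstraps, so existing nodes have to be replaced or rebuilt rather than edited in place:

    # Hedged sketch: the intended setting for new nodes/clusters.
    #   num_tokens: 16
    # Verify what is currently configured (path is an assumption):
    grep -n '^num_tokens' /etc/cassandra/cassandra.yaml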
Thanks, Jiayong Sun
On Monday, August 16, 2021, 02:46:24 AM PDT, Bowen Song wrote:
Hello Jiayong,
how old hints could be safely removed without impacting data consistency? I think this may depend on many factors, but I was wondering if there is any kind of rule of thumb?
Thanks, Jiayong Sun
On Sunday, August 15, 2021, 05:58:11 AM PDT, Bowen Song wrote:
Hi Jiayong,
>50GB per node) are not uncommon in this cluster, especially since this issue has been occurring and causing many nodes to lose gossip. We have had to set up a daily cron job to clear older hints from disk, but we are not sure if this would hurt data consistency among nodes and DCs.
Thoughts?
Thanks, Jiayong Sun
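For illustration only, a hedged sketch of the kind of cleanup job being described plus the follow-up it implies (the hints directory and the 3-hour cutoff, matching the default max_hint_window_in_ms, are assumptions and not details from this cluster); any hint deleted before delivery means the affected data has to be re-synced by repair:

    # Hedged sketch of a daily hint-cleanup job (paths/ages are assumptions).
    find /var/lib/cassandra/hints -name '*.hints' -mmin +180 -delete
    # Data covered by the deleted hints should then be re-synced, e.g.:
    nodetool repair -pr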
ted with the repair. I can see messages of flushing BOTH the major app tables AND the system.sstable_activity table, but the number of sstables is much higher for the app tables.
Thanks, Jiayong Sun
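For what it's worth, one hedged way to compare SSTable counts per table (the keyspace and table names below are placeholders, not names from this cluster) might be:

    # Hedged sketch: compare SSTable counts for an app table vs. the system table.
    nodetool tablestats my_keyspace.my_app_table | grep -i 'SSTable count'
    nodetool tablestats system.sstable_activity | grep -i 'SSTable count'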
On Friday, August 13, 2021, 01:43:06 PM PDT, Jeff Jirsa wrote:
A very large cluster using vnodes
g/deleting of memtables and sstables (I listed a few example messages at the beginning of this email thread). Thanks for all your thoughts; I really appreciate it.
Thanks, Jiayong Sun
On Friday, August 13, 2021, 01:36:21 PM PDT, Bowen Song wrote:
Hi Jiayong,
That doesn't r
> 100 MB) partitions?
Those are the 3 things mentioned in the SO question. I'm trying to find the
connections between the issue you are experiencing and the issue described in
the SO question.
Cheers,
Bowen
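One hedged way to check for such large (e.g. > 100 MB) partitions, with placeholder keyspace/table names, would be something like:

    # Hedged sketch: the "Partition Size" percentiles reveal very large partitions.
    nodetool tablehistograms my_keyspace my_app_table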
On 13/08/2021 01:36, Jiayong Sun wrote:
Hello Bowen,
Thank
like workload related, and we are seeking feedback here on any other parameters in the yaml that we could tune for this.
Thanks again, Jiayong Sun
On Thursday, August 12, 2021, 04:55:51 AM PDT, Bowen Song wrote:
Hello Jiayong,
Using multiple disks in a RAID0 for Cassandra data dire
e_activity') to free up room. Used
total: 0.06/1.00, live: 0.00/0.00, flushing: 0.02/0.29, this: 0.00/0.00
Thanks, Jiayong Sun
On Wednesday, August 11, 2021, 12:06:27 AM PDT, Erick Ramirez wrote:
4 flush writers isn't bad since the default is 2. It doesn't make a difference
Hi Erick,
Thanks for your response. Actually, we did not set memtable_cleanup_threshold in cassandra.yaml. However, we have memtable_flush_writers: 4 defined in the yaml, and the VM node has 16 cores. Any advice on this parameter's value?
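As a sketch of what could be checked here (the cassandra.yaml path is an assumption): when memtable_cleanup_threshold is left unset, Cassandra derives it as 1 / (memtable_flush_writers + 1), so with 4 flush writers the effective value would be 0.2.

    # Hedged sketch: confirm what is actually set (path is an assumption).
    # With memtable_flush_writers: 4 and no explicit memtable_cleanup_threshold,
    # the derived default is 1 / (4 + 1) = 0.2.
    grep -En 'memtable_flush_writers|memtable_cleanup_threshold' /etc/cassandra/cassandra.yaml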
Thanks again, Jiayong Sun
On Tuesday, August 10, 2021, 09:2
nodes. When restarting Cassandra on these nodes, it takes a very long time, like 1 or 2 hours, to DRAIN or to start up, and many of the above "Deleting sstable" messages are logged, which looks like a cleanup process clearing those tiny sstables.
Any idea or advice?
Thanks, Jiayong Sun
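For context, a hedged sketch of the standard pre-restart checks that can show whether pending flushes and compactions are what makes drain and startup slow (no specific thresholds or fix implied):

    # Hedged sketch: inspect pending work before restarting a node.
    nodetool tpstats           # look at pending/blocked MemtableFlushWriter tasks
    nodetool compactionstats   # check whether compactions are still in progress
    nodetool drain             # flush memtables and stop accepting writes before restart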