Re: Large number of tiny sstables flushed constantly

2021-08-16 Thread Jiayong Sun
Thank you so much, Bowen, for your advice on this. Really appreciate it! Thanks, Jiayong Sun
On Monday, August 16, 2021, 11:56:39 AM PDT, Bowen Song wrote: Hi Jiayong, You will need to reduce the num_tokens on all existing nodes in the cluster in order to "fix" the re…
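For reference, num_tokens is only read when a node first bootstraps, so it cannot simply be lowered on nodes that already own data; the usual route is to build replacement nodes (or a new logical datacenter) with the lower value and retire the old ones. A minimal sketch of the relevant setting, assuming Cassandra 3.x:

    # cassandra.yaml on a NEW, empty node -- changing this on a node that
    # already has data has no effect and is not supported
    num_tokens: 16

    # optional in 3.x: bias token allocation toward even ownership for one
    # keyspace (keyspace name here is hypothetical)
    # allocate_tokens_for_keyspace: my_keyspace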

Re: Large number of tiny sstables flushed constantly

2021-08-16 Thread Jiayong Sun
…Actually, we used to add a ring with num_tokens: 4 in this cluster but later removed it due to some other issue. We have started using num_tokens: 16 as the standard for any new clusters. Thanks, Jiayong Sun
On Monday, August 16, 2021, 02:46:24 AM PDT, Bowen Song wrote: Hello Jia…

Re: Large number of tiny sstables flushed constantly

2021-08-16 Thread Jiayong Sun
…How old hints could be safely removed without impacting data consistency? I think this question may depend on many factors, but I was wondering if there is any kind of rule of thumb? Thanks, Jiayong Sun
On Sunday, August 15, 2021, 05:58:11 AM PDT, Bowen Song wrote: Hi Jiayong, …
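The thread does not settle on a single rule of thumb, but two settings bound how useful a hint can be: hints are only collected while a replica has been down for less than max_hint_window_in_ms, and replaying very old hints against tables past their gc_grace_seconds risks resurrecting deleted data. A cassandra.yaml excerpt with the stock defaults, for illustration only:

    # cassandra.yaml -- defaults shown for context, not thread-recommended values
    hinted_handoff_enabled: true
    max_hint_window_in_ms: 10800000   # 3 hours; hinting stops for replicas down longer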

Re: Large number of tiny sstables flushed constantly

2021-08-13 Thread Jiayong Sun
…>50GB per node) are not uncommon in this cluster, especially since this issue has been occurring and causing many nodes to lose gossip. We have had to set up a daily cron job to clear the older hints from disk, but we are not sure if this would hurt data consistency among nodes and DCs. Thoughts? Thanks, Jiayon…
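If stored hints must be cleared, nodetool offers a supported route, as opposed to deleting files out from under a running node; either way, the affected replicas should be repaired afterwards, since the hinted writes are lost. A sketch (the path, schedule, and 1-day cutoff are assumptions, not values from this thread):

    # supported: drop all hints stored on this node
    nodetool truncatehints

    # cron-style alternative like the one described above -- delete hint
    # files older than a day; the directory shown is the common default
    0 3 * * * find /var/lib/cassandra/hints -name '*.hints' -mmin +1440 -delete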

Re: Large number of tiny sstables flushed constantly

2021-08-13 Thread Jiayong Sun
…ted with the repair. I can see messages of flushing BOTH the major app tables AND the system.sstable_activity table, but the number of sstables is much higher for the app tables. Thanks, Jiayong Sun
On Friday, August 13, 2021, 01:43:06 PM PDT, Jeff Jirsa wrote: A very large cluster using vnodes…
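One way to quantify which tables dominate the flush activity is to tally the flush log lines per table. The command below is a sketch that assumes the stock log location and the 3.x "Flushing largest CFS(...)" message format:

    grep "Flushing largest CFS" /var/log/cassandra/system.log \
      | sed -E "s/.*Keyspace='([^']+)', ColumnFamily='([^']+)'.*/\1.\2/" \
      | sort | uniq -c | sort -rn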

Re: Large number of tiny sstables flushed constantly

2021-08-13 Thread Jiayong Sun
…flushing/deleting of memtables and sstables (I listed a few example messages at the beginning of this email thread). Thanks for all your thoughts; I really appreciate it. Thanks, Jiayong Sun
On Friday, August 13, 2021, 01:36:21 PM PDT, Bowen Song wrote: Hi Jiayong, That doesn't r…

Re: Large number of tiny sstables flushed constantly

2021-08-13 Thread Jiayong Sun
…> 100 MB) partitions? Those are the 3 things mentioned in the SO question. I'm trying to find the connections between the issue you are experiencing and the issue described in the SO question. Cheers, Bowen
On 13/08/2021 01:36, Jiayong Sun wrote: Hello Bowen, Thank…
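To check the large-partition angle raised here, partition size percentiles per table can be read from nodetool; the keyspace and table names below are placeholders:

    # shows partition size percentiles (50%/75%/95%/98%/99%/Min/Max)
    nodetool tablehistograms my_keyspace my_table

    # Cassandra also logs a warning when compaction writes a partition larger
    # than compaction_large_partition_warning_threshold_mb (default 100 MB)
    grep "Writing large partition" /var/log/cassandra/system.log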

Re: Large number of tiny sstables flushed constantly

2021-08-12 Thread Jiayong Sun
…like workload-related, and we are seeking feedback here on any other parameters in the yaml that we could tune for this. Thanks again, Jiayong Sun
On Thursday, August 12, 2021, 04:55:51 AM PDT, Bowen Song wrote: Hello Jiayong, Using multiple disks in a RAID0 for the Cassandra data dire…
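The RAID0 point refers to Cassandra's ability to manage multiple data volumes itself: instead of striping, each disk can be listed as its own data directory, so one disk failure does not take out the whole stripe. A hypothetical excerpt (paths are placeholders):

    # cassandra.yaml
    data_file_directories:
        - /mnt/disk1/cassandra/data
        - /mnt/disk2/cassandra/data
    disk_failure_policy: best_effort   # illustrative; choose a policy deliberately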

Re: Large number of tiny sstables flushed constantly

2021-08-11 Thread Jiayong Sun
…sstable_activity') to free up room. Used total: 0.06/1.00, live: 0.00/0.00, flushing: 0.02/0.29, this: 0.00/0.00 Thanks, Jiayong Sun
On Wednesday, August 11, 2021, 12:06:27 AM PDT, Erick Ramirez wrote: 4 flush writers isn't bad since the default is 2. It doesn't make a difference…
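The "to free up room" line is emitted when the memtable pools cross their cleanup threshold and the largest memtable is flushed to make space. The knobs involved, with illustrative values rather than recommendations:

    # cassandra.yaml -- both sizes default to 1/4 of the heap when unset
    memtable_heap_space_in_mb: 2048
    memtable_offheap_space_in_mb: 2048
    memtable_allocation_type: heap_buffers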

Re: Large number of tiny sstables flushed constantly

2021-08-11 Thread Jiayong Sun
Hi Erick, Thanks for your response. Actually, we did not set memtable_cleanup_threshold in cassandra.yaml. However, we have memtable_flush_writers: 4 defined in the yaml, and the VM node has 16 cores. Any advice on this param's value? Thanks again, Jiayong Sun
On Tuesday, August 10, 2021, 09:2…
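For context on how the two settings interact: when memtable_cleanup_threshold is left unset, Cassandra derives it as 1 / (memtable_flush_writers + 1), so 4 flush writers imply a threshold of 0.2, i.e. the largest memtable is flushed once memtables occupy 20% of the configured memtable space. Sketch:

    # cassandra.yaml
    memtable_flush_writers: 4
    # implied default when unset: memtable_cleanup_threshold = 1/(4+1) = 0.2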

Large number of tiny sstables flushed constantly

2021-08-10 Thread Jiayong Sun
…nodes. When restarting Cassandra on these nodes, it takes a very long time, like 1 or 2 hours, to DRAIN down or START up, and many of the above "Deleting sstable" messages are logged, which looks like a cleanup process clearing those tiny sstables. Any idea or advice? Thanks, Jiayong Sun
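For the restart path described above, draining explicitly before stopping the process flushes memtables and halts writes, so startup does not have to replay a large commit log. A minimal sketch; the service name is an assumption:

    nodetool drain
    sudo systemctl restart cassandra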