Re: High disk usage Cassandra 3.11.7

2021-09-18 Thread Bowen Song
Is there any reason not to use TTL? No compaction strategy is going to cope with frequent massive deletions. In fact, the queue-like data model is a Cassandra anti-pattern.
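The TTL approach suggested above can be sketched via cqlsh. The keyspace, table, column names, and window size below are illustrative assumptions, not taken from the thread; only the 48-hour retention matches what is discussed later:

```shell
# Sketch only: assumes a reachable cluster and an existing keyspace "myks".
cqlsh -e "
CREATE TABLE myks.events (
    source_id  uuid,
    event_time timestamp,
    payload    text,
    PRIMARY KEY (source_id, event_time)
) WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'HOURS',
    'compaction_window_size': '4'
  }
  AND default_time_to_live = 172800;  -- 172800 s = 48 h; rows expire instead of being DELETEd
"
```

With TTL plus TWCS, whole expired SSTables can simply be dropped once they are past gc_grace, which avoids the tombstone-versus-live-data compaction churn that explicit deletes create under LCS.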

Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Abdul Patel
TWCS is best for TTL data, not for explicit deletes, correct?

Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Abdul Patel
The 48-hour deletion job deletes data older than 48 hours. LCS was used as it's more of a write-once, read-many application.

Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Bowen Song
Congratulations! You've just found the cause. Does all data get deleted 48 hours after it is inserted? If so, are you sure LCS is the right compaction strategy for this table? TWCS sounds like a much better fit for this purpose.

Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Abdul Patel
Thanks. The application deletes data older than 48 hours. Auto compaction works, but as the disk is full, the error log only says there is not enough space to run compaction.

Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Bowen Song
If major compaction is failing due to a disk space constraint, you could copy the files to another server and run a major compaction there instead (i.e. start Cassandra on the new server without joining the existing cluster). If you must replace the node, at least use the '-Dcassandra.replace_address=...' option.
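Replacing a node with the flag mentioned above might look like the following sketch. The dead node's IP is a placeholder, and passing the flag via JVM_EXTRA_OPTS is one common approach; it can equally go in cassandra-env.sh or the jvm options file:

```shell
# Sketch only: bring up the replacement node streaming the dead node's
# token ranges, rather than decommission/removenode plus a fresh bootstrap.
# 10.0.0.12 below is the (hypothetical) address of the node being replaced.
JVM_EXTRA_OPTS="-Dcassandra.replace_address=10.0.0.12" cassandra -f
```

Unlike removenode followed by adding a brand-new node, a replace keeps the same token ranges, so before/after disk usage on the node stays directly comparable.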

Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Abdul Patel
Close to 300 GB of data. I ran nodetool decommission/removenode and added one node back, and it came back to 22 GB. Can't run major compaction as there isn't much space left.

Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Bowen Song
Okay, so how big exactly is the data on disk? You said removing and adding a new node gives you 20 GB on disk; was that done via the '-Dcassandra.replace_address=...' parameter? If not, the new node will almost certainly have a different token range and will not be directly comparable to the existing node.

Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Abdul Patel
Yes, I checked and cleared all snapshots, and I also had incremental backups in the backups folder, which I removed. It's purely data.

Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Bowen Song
Assuming your total disk space is a lot bigger than 50 GB (accounting for disk space amplification, commit log, logs, OS data, etc.), I would suspect the disk space is being used by something else. Have you checked that the disk space is actually being used by the Cassandra data directory?
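The checks suggested above can be sketched as follows. The paths assume a default package install under /var/lib/cassandra; adjust for your layout:

```shell
# Sketch only: confirm what is actually consuming the disk.
du -sh /var/lib/cassandra/data /var/lib/cassandra/commitlog

# Snapshots and incremental backups live inside the data directory
# and are easy to miss when eyeballing table sizes:
nodetool listsnapshots
nodetool clearsnapshot          # with no keyspace given, clears all snapshots
du -sh /var/lib/cassandra/data/*/*/backups 2>/dev/null
```

If the data directory itself accounts for the usage (as the reporter later confirms), the culprit is the data model/compaction interaction rather than stray files.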

High disk usage Cassandra 3.11.7

2021-09-16 Thread Abdul Patel
Hello. We have Cassandra with the LeveledCompactionStrategy, and we recently found the filesystem almost 90% full, although the data was only 10M records. Will manual compaction work? I'm not sure it's recommended, and space is also a constraint. We tried removing and adding one node, and now the data is at 20 GB, which looks appropriate.
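For reference, a manual major compaction is a nodetool call; the keyspace and table names below are hypothetical. On 3.11 there is also `nodetool garbagecollect` (available since Cassandra 3.10), which rewrites individual SSTables to drop deleted data and typically needs less temporary headroom than a full major compaction:

```shell
# Sketch only: both commands act on a live node.
nodetool compact myks mytable         # major compaction; needs free space for the rewrite
nodetool garbagecollect myks mytable  # drop tombstoned data with a smaller space footprint
```

Note that a major compaction temporarily needs roughly as much free space as the data being rewritten, which is exactly the constraint hit later in this thread.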