Re: Compactions are stuck in 4.0.5 version

2023-01-14 Thread vaibhav khedkar
be > adjusted to avoid such a large number of rows being stored within one > partition. > > – Scott > > On Jan 13, 2023, at 9:24 PM, vaibhav khedkar wrote: > > > Hello All, > > We are facing an issue where a few of the nodes are not able to complete > compactions

Re: Compactions are stuck in 4.0.5 version

2023-01-13 Thread C. Scott Andreas
3, at 9:24 PM, vaibhav khedkar wrote: Hello All, We are facing an issue where a few of the nodes are not able to complete compactions. We tried restarting, scrubbing and even rebuilding an entire node but nothing seems to work so far. It's a 10 Region installation with close to 150 nodes. DataStax support

Compactions are stuck in 4.0.5 version

2023-01-13 Thread vaibhav khedkar
Hello All, We are facing an issue where a few of the nodes are not able to complete compactions. We tried restarting, scrubbing and even rebuilding an entire node but nothing seems to work so far. It's a 10 Region installation with close to 150 nodes. DataStax support <https://support.data

Re: Help determining pending compactions

2022-11-07 Thread Richard Hesse
Thanks for the tip Eric. We're actually on 3.2 and the issue isn't with the Reaper. The issue is with Cassandra. It will report that a table has pending compactions, but it will never actually start compacting. The pending number stays at that level until we run a manual compaction. -ri

RE: Help determining pending compactions

2022-11-07 Thread Eric Ferrenbach
: Sunday, October 30, 2022 12:07 PM To: user@cassandra.apache.org Subject: Help determining pending compactions

Re: Help determining pending compactions

2022-10-30 Thread Richard Hesse
hoping to get some help with a vexing issue with one of our > keyspaces. During Reaper repair sessions, one keyspace will end up with > hanging, non-started compactions. That is, the number of compactions as > reported by nodetool compactionstats stays flat and there are no running

Re: Help determining pending compactions

2022-10-30 Thread Dinesh Joshi
ace will end up with > hanging, non-started compactions. That is, the number of compactions as > reported by nodetool compactionstats stays flat and there are no running > compactions. Is there a way to determine which tables Cassandra is stuck on > here? > > Looking at graph

Help determining pending compactions

2022-10-30 Thread Richard Hesse
Hi, I'm hoping to get some help with a vexing issue with one of our keyspaces. During Reaper repair sessions, one keyspace will end up with hanging, non-started compactions. That is, the number of compactions as reported by nodetool compactionstats stays flat and there are no running compac

Re: slow compactions

2022-03-06 Thread Bowen Song
Is there anything interesting in the system.log and GC log? What does "nodetool tpstats" show when the node is doing the slow compactions? By the way, Cassandra 3.11.2 is 4 years old, you really should consider upgrading. On 06/03/2022 09:09, onmstester onmstester wrote: Forgot
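A minimal diagnostic pass along the lines Bowen suggests (log path is an assumption; adjust for your install):

    nodetool tpstats               # any blocked/pending CompactionExecutor threads?
    nodetool compactionstats -H    # is the slow compaction making progress at all?
    grep -i -e "GCInspector" -e "pause" /var/log/cassandra/system.log | tail   # long GC pauses?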

Re: slow compactions

2022-03-06 Thread onmstester onmstester
Forgot to mention that I'm using default STCS for all tables On Sun, 06 Mar 2022 12:29:52 +0330 onmstester onmstester wrote Hi, Sometimes compactions getting so slow (a few KBs per second for each compaction) on a few nodes which would be fixed temporarily by resta

slow compactions

2022-03-06 Thread onmstester onmstester
Hi, Sometimes compactions get so slow (a few KBs per second for each compaction) on a few nodes; this is fixed temporarily by restarting Cassandra (although it comes back a few hours later). Copied sstables related to slow compactions to an isolated/single node

Re: Anti Compactions while running repair

2020-11-09 Thread manish khandelwal
Thanks Alex On Mon, Nov 9, 2020 at 12:36 PM Alexander DEJANOVSKI wrote: > Only sstables in unrepaired state go through anticompaction. > > On Mon, 9 Nov 2020 at 07:01, manish khandelwal < > manishkhandelwa...@gmail.com> wrote: > >> Thanks Alex. >> >> One more query, are all sstables (repaired

Re: Anti Compactions while running repair

2020-11-08 Thread Alexander DEJANOVSKI
Only sstables in unrepaired state go through anticompaction. On Mon, 9 Nov 2020 at 07:01, manish khandelwal wrote: > Thanks Alex. > > One more query, are all sstables (repaired + unrepaired) part of > anti-compaction? We are using full repair with the -pr option. > > Regards > Manish > > On Mon,

Re: Anti Compactions while running repair

2020-11-08 Thread manish khandelwal
Thanks Alex. One more query, are all sstables (repaired + unrepaired) part of anti-compaction? We are using full repair with the -pr option. Regards Manish On Mon, Nov 9, 2020 at 11:17 AM Alexander DEJANOVSKI wrote: > Hi Manish, > > Anticompaction is the same whether you run full or incremental r

Re: Anti Compactions while running repair

2020-11-08 Thread Alexander DEJANOVSKI
Hi Manish, Anticompaction is the same whether you run full or incremental repair. On Fri, 6 Nov 2020 at 04:37, manish khandelwal wrote: > In the documentation it is stated that while running incremental repairs, anti > compaction is done which results in repaired and unrepaired sstables. Since >

Anti Compactions while running repair

2020-11-05 Thread manish khandelwal
In the documentation it is stated that while running incremental repairs, anti-compaction is done, which results in repaired and unrepaired sstables. Since anti-compaction also runs with full repair and primary range repairs, I have the following question: Is anti-compaction different in case of full r

Re: Multiple compactions to same disk with 3.11.4

2019-10-01 Thread Matthias Pfau
You are right, you could set concurrent_compactors to 1 to just allow a single compaction at a time. However, that isn't feasible in our scenario with multiple data dirs as compactions would accumulate. We want to run multiple compactions in parallel but only one per data dir

Re: Multiple compactions to same disk with 3.11.4

2019-10-01 Thread Elliott Sims
rtunately, we are running into problems with the compaction > scheduling, now. From time to time, a bunch of compactions (e.g. 6) are > scheduled for the same data dir. This makes no sense for spinning disks as > it will slow down all compactions and other operations like flushes > drama

Multiple compactions to same disk with 3.11.4

2019-10-01 Thread Matthias Pfau
Hi there, we recently upgraded from 2.2 to 3.11.4. Unfortunately, we are now running into problems with the compaction scheduling. From time to time, a bunch of compactions (e.g. 6) are scheduled for the same data dir. This makes no sense for spinning disks as it will slow down all

Re: Repairs/compactions on tables with solr indexes

2019-08-08 Thread Dinesh Joshi
en repairs are run, does it initiate rebuilds of solr indexes? Does it > rebuild only when any data is repaired? > 2. How about the compactions, does it trigger any search indexes rebuilds? I > guess not, since data is not getting changed, but not sure. Or maybe when it > cleans tom

Repairs/compactions on tables with solr indexes

2019-08-07 Thread Ayub M
Hello, we are using a DSE Search workload with Search and Cassandra running on the same nodes/JVM. 1. When repairs are run, does it initiate rebuilds of solr indexes? Does it rebuild only when any data is repaired? 2. How about the compactions, does it trigger any search indexes rebuilds? I guess not, since

RE: TWCS Compactions & Tombstones

2019-03-27 Thread Nick Hatfield
Awesome, thanks again! From: Jeff Jirsa [mailto:jji...@gmail.com] Sent: Wednesday, March 27, 2019 1:36 PM To: cassandra Subject: Re: TWCS Compactions & Tombstones You would need to swap your class from the com.jeffjirsa variant (probably from 2.1 / 2.2) to the official TWCS class. Once
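A sketch of the class swap Jeff describes, assuming a hypothetical table my_ks.my_table (your schema will show the fork's class string before the change):

    cqlsh -e "ALTER TABLE my_ks.my_table WITH compaction = {
      'class': 'TimeWindowCompactionStrategy'};"

Note that ALTER replaces the entire compaction map, so any window or tombstone options you rely on must be restated alongside the class.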

Re: TWCS Compactions & Tombstones

2019-03-27 Thread Jeff Jirsa
had not seen this yet. So we have this > enabled, I guess it will just take time to finally chew through it all? > > > > *From:* Jeff Jirsa [mailto:jji...@gmail.com] > *Sent:* Tuesday, March 26, 2019 9:41 PM > *To:* user@cassandra.apache.org > *Subject:* Re: TWCS Compactions

RE: TWCS Compactions & Tombstones

2019-03-27 Thread Nick Hatfield
Awesome, thank you Jeff. Sorry I had not seen this yet. So we have this enabled, I guess it will just take time to finally chew through it all? From: Jeff Jirsa [mailto:jji...@gmail.com] Sent: Tuesday, March 26, 2019 9:41 PM To: user@cassandra.apache.org Subject: Re: TWCS Compactions

Re: TWCS Compactions & Tombstones

2019-03-26 Thread James Brown
Have you tried enabling 'unchecked_tombstone_compaction' on the affected tables? On Tue, Mar 26, 2019 at 5:01 AM Nick Hatfield wrote: > How does one properly rid of sstables that have fallen victim to > overlapping timestamps? I realized that we had TWCS set in our CF which > also had a read_rep
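For reference, a sketch of enabling the option James mentions, on a hypothetical TWCS table (restate your existing compaction options, since the map is replaced wholesale):

    cqlsh -e "ALTER TABLE my_ks.my_table WITH compaction = {
      'class': 'TimeWindowCompactionStrategy',
      'unchecked_tombstone_compaction': 'true'};"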

RE: TWCS Compactions & Tombstones

2019-03-26 Thread Nick Hatfield
mpaction': 'true'}' AND default_time_to_live = 7884009 AND gc_grace_seconds = 86400 AND read_repair_chance = 0 What's the best way to examine the sstable data so that I can verify that it is old data, other than by the min / max timestamps? Thanks for your help

Re: TWCS Compactions & Tombstones

2019-03-26 Thread Jeff Jirsa
Or upgrade to a version with https://issues.apache.org/jira/browse/CASSANDRA-13418 and enable that feature -- Jeff Jirsa > On Mar 26, 2019, at 6:23 PM, Rahul Singh wrote: > > What's your timewindow? Roughly how much data is in each window? > > If you examine the sstable data and see that

Re: TWCS Compactions & Tombstones

2019-03-26 Thread Rahul Singh
What's your timewindow? Roughly how much data is in each window? If you examine the sstable data and see that is truly old data with little chance that it has any new data, you can just remove the SStables. You can do a rolling restart -- take down a node, remove mc-254400-* and then start it up.
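A cautious sketch of the rolling removal Rahul describes; paths and filenames are illustrative, and it assumes you first confirm via sstablemetadata that the sstable holds only old data:

    sstablemetadata /var/lib/cassandra/data/my_ks/my_table-*/mc-254400-big-Data.db \
        | grep -i timestamp                       # verify min/max timestamps are truly old
    nodetool drain && sudo systemctl stop cassandra
    sudo mv /var/lib/cassandra/data/my_ks/my_table-*/mc-254400-* /backup/
    sudo systemctl start cassandra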

TWCS Compactions & Tombstones

2019-03-26 Thread Nick Hatfield
How does one properly get rid of sstables that have fallen victim to overlapping timestamps? I realized that we had TWCS set in our CF which also had a read_repair = 0.1 and after correcting this to 0.0 I can clearly see the effects over time on the new sstables. However, I still have old sstables t

Re: 1.2.19: AssertionError when running compactions on a CF with TTLed columns

2018-12-11 Thread Reynald Borer
th different sizes and it confused Cassandra. So, problem solved now :-) Cheers, Reynald On Fri, Aug 31, 2018 at 7:45 AM Reynald Borer wrote: > Hi everyone, > > I'm running a Cassandra 1.2.19 cluster of 40 nodes and compactions of a > specific column family are sporadically raising

1.2.19: AssertionError when running compactions on a CF with TTLed columns

2018-08-30 Thread Reynald Borer
Hi everyone, I'm running a Cassandra 1.2.19 cluster of 40 nodes and compactions of a specific column family are sporadically raising an AssertionError like this (full stack trace visible under https://gist.github.com/rborer/46862d6d693c0163aa8fe0e74caa2d9a): ERROR [CompactionExecutor:9137]

Re: Auto Compactions not running on Cassandra 3.10

2018-08-01 Thread Anshul Rathore
Hi It would be great if anyone can point us in the right direction. On Mon, Jul 30, 2018 at 12:07 PM Anshul Rathore wrote: > Thanks Jeff for your response, and apologies for such a delayed response, > had some personal emergency. > > So following is the config which we are using for that table > PRI

Re: Infinite loop of single SSTable compactions

2018-07-30 Thread Martin Mačura
Hi Rahul, the table TTL is 24 months. Oldest data is 22 months, so no expirations yet. Compacted partition maximum bytes: 17 GB - yeah, I know that's not good, but we'll have to wait for the TTL to make it go away. More recent partitions are kept under 100 MB by bucketing. The data model: CREAT

Re: Auto Compactions not running on Cassandra 3.10

2018-07-29 Thread Anshul Rathore
Thanks Jeff for your response, and apologies for such a delayed response, had some personal emergency. So following is the config which we are using for that table: PRIMARY KEY ((customer_app_prefix, customer_session_id), beacon_client_type, sim_created_at) ) WITH CLUSTERING ORDER BY (beacon_client_

Re: Infinite loop of single SSTable compactions

2018-07-26 Thread Rahul Singh
A few questions: What is your maximumcompactedbytes across the cluster for this table? What's your TTL? What does your data model look like, as in, what's your PK? Rahul On Jul 25, 2018, 1:07 PM -0400, James Shaw wrote: > nodetool compactionstats  --- see compacting which table > nodetool cfstats

Re: Infinite loop of single SSTable compactions

2018-07-25 Thread James Shaw
nodetool compactionstats --- see which table is compacting
nodetool cfstats keyspace_name.table_name --- check partition size, tombstones
Go to the data file directories: look at the data file size and timestamp --- compaction will write to a new temp file with _tmplink...; use sstablemetadata ...

Infinite loop of single SSTable compactions

2018-07-25 Thread Martin Mačura
Hi, we have a table which is being compacted all the time, with no change in size. Compaction History:
    compacted_at              bytes_in       bytes_out      rows_merged
    2018-07-25T05:26:48.101   57248063878    57248063878    {1:11655}
    2018-07-25T01:09:47.346   57248063878    57248063878    {1:11655}

Re: Auto Compactions not running on Cassandra 3.10

2018-07-04 Thread Jeff Jirsa
The DTCS windowing algorithm is timestamp sensitive - are you using milliseconds or microseconds in your writes? What’s your exact DTCS config for that table? 3.10 has TWCS, too, which may be considerably easier to use. It also uses timestamp units in its config, so you’ll need to know which re
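By contrast with DTCS, TWCS makes the window explicit; a sketch for a hypothetical table with one-day windows:

    cqlsh -e "ALTER TABLE my_ks.my_table WITH compaction = {
      'class': 'TimeWindowCompactionStrategy',
      'compaction_window_unit': 'DAYS',
      'compaction_window_size': '1'};"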

Auto Compactions not running on Cassandra 3.10

2018-07-04 Thread Anshul Rathore
Hi We are using a 4 node cassandra 3.10 cluster. For some reason autocompaction is not running on one of the tables, which uses DTCS. TTL on this table is 3 months. The table has a high write load and a medium read load. We have 4 disks per node; each disk grew to around 5k-6k sstables going back to aro

Auto Compactions not running on Cassandra 3.10

2018-05-07 Thread Anshul Rathore
Hi We are using a 4 node cassandra 3.10 cluster. For some reason autocompaction is not running on one of the tables, which uses DTCS. TTL on this table is 3 months. The table has a high write load and a medium read load. We have 4 disks per node; each disk grew to around 5k-6k sstables going back to aro

Re: Adding new nodes to cluster to speedup pending compactions

2018-04-28 Thread Mikhail Tsaplin
2018-04-28 8:03 GMT+07:00 Evelyn Smith: > Hi Mikhail, > > There are a few ways to speed up compactions in the short term: > - nodetool setcompactionthroughput 0 > This will unthrottle compactions but obviously unthrottling compactions > puts you at risk of high latency while

Re: Adding new nodes to cluster to speedup pending compactions

2018-04-27 Thread Evelyn Smith
Hi Mikhail, There are a few ways to speed up compactions in the short term:
- nodetool setcompactionthroughput 0
  This will unthrottle compactions, but obviously unthrottling compactions puts you at risk of high latency while compactions are running.
- nodetool setconcurrentcompactors 2
  You
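The two commands Evelyn names, as they would be run; the second subcommand exists only on versions that ship it, otherwise concurrent_compactors is set in cassandra.yaml:

    nodetool setcompactionthroughput 0    # 0 = unthrottled; watch read latency
    nodetool setconcurrentcompactors 2    # raise compaction parallelism where supported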

Re: Adding new nodes to cluster to speedup pending compactions

2018-04-27 Thread Jonathan Haddad
Your compaction time won't improve immediately simply by adding nodes because the old data still needs to be cleaned up. What's your end goal? Why is having a spike in pending compaction tasks following a massive write an issue? Are you seeing a dip in performance, violating an SLA, or do you ju

Re: Adding new nodes to cluster to speedup pending compactions

2018-04-27 Thread Mikhail Tsaplin
The cluster has 5 nodes of the d2.xlarge AWS type (32GB RAM, attached SSD disks), Cassandra 3.0.9. Increased compaction throughput from 16 to 200 - active compaction remaining time decreased. What will happen when another node joins the cluster? Will the former nodes move part of their SSTables to the

Re: Adding new nodes to cluster to speedup pending compactions

2018-04-27 Thread Nicolas Guyomar
Hi Mikhail, Could you please provide: - your cluster version/topology (number of nodes, cpu, ram available etc) - what kind of underlying storage you are using - cfstats using the -H option, because I'm never sure I'm converting bytes => GB You are storing 1 TB per node, so long running compaction is not r

Adding new nodes to cluster to speedup pending compactions

2018-04-27 Thread Mikhail Tsaplin
Hi, I have a five-node C* cluster suffering from a big number of pending compaction tasks: 1) 571; 2) 91; 3) 367; 4) 22; 5) 232. Initially, it was holding one big table (table_a). With Spark, I read that table, extended its data and stored it in a second table_b. After this copying/extending process

Re: Index summary redistribution seems to block all compactions

2017-10-25 Thread Sotirios Delimanolis
It could be https://issues.apache.org/jira/browse/CASSANDRA-13873 On Tue, Oct 24, 2017 at 11:18 PM, Sotirios Delimanolis wrote: On a Cassandra 2.2.11 cluster, I noticed estimated compactions accumulating on one node. nodetool compactionstats showed the following: compaction type

Re: Index summary redistribution seems to block all compactions

2017-10-25 Thread Marcus Eriksson
Anything in the logs? It *could* be https://issues.apache.org/jira/browse/CASSANDRA-13873 On Tue, Oct 24, 2017 at 11:18 PM, Sotirios Delimanolis < sotodel...@yahoo.com.invalid> wrote: > On a Cassandra 2.2.11 cluster, I noticed estimated compactions > accumulating on one no

Index summary redistribution seems to block all compactions

2017-10-24 Thread Sotirios Delimanolis
On a Cassandra 2.2.11 cluster, I noticed estimated compactions accumulating on one node. nodetool compactionstats showed the following:
    compaction type   keyspace   table        completed   total   unit   progress
    Compaction        ks1        some_table

Re: Cassandra 3.11.0 compaction attempting impossible to complete compactions

2017-10-15 Thread kurt greaves
I believe that's the decompressed data size, so if your data is heavily compressed it might be perfectly logical for you to be doing such large compactions. Worth checking what SSTables are included in the compaction. If you've been running STCS for a while you probably just have a few

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Stefano Ortolani
o make the bootstrap incremental, I have been throttling the streams on > all nodes to 1Mbits. I have selectively unthrottling one node at a time > hoping that would unlock some routines compacting away redundant data > (you'll see that nodetool netstats reports back fewer nodes th

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Jeff Jirsa
bits. I have selectively unthrottling one node at a time >> hoping that would unlock some routines compacting away redundant data >> (you'll see that nodetool netstats reports back fewer nodes than nodetool >> status). >> * Since compactions have had the tendency of getti

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Jeff Jirsa
unthrottling one node at a time > hoping that would unlock some routines compacting away redundant data (you'll > see that nodetool netstats reports back fewer nodes than nodetool status). > * Since compactions have had the tendency of getting stuck (hundreds pending > but none e

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Stefano Ortolani
would unlock some routines compacting away redundant data (you'll see that nodetool netstats reports back fewer nodes than nodetool status). * Since compactions have had the tendency of getting stuck (hundreds pending but none executing) in previous bootstraps, I've tried issuing a manual

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Jeff Jirsa
wrote: >>> >>> Hi all, >>> >>> I have been trying "-Dcassandra.disable_stcs_in_l0=true", but no luck so >>> far. >>> Based on the source code it seems that this option doesn't affect >>> compactions while bootstrapping.

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Stefano Ortolani
ni wrote: > > Hi all, > > I have been trying "-Dcassandra.disable_stcs_in_l0=true", but no luck so > far. > Based on the source code it seems that this option doesn't affect > compactions while bootstrapping. > > I am getting quite confused as it seems I am

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Jeff Jirsa
What version? Single disk or JBOD? Vnodes? -- Jeff Jirsa > On Oct 15, 2017, at 12:49 PM, Stefano Ortolani wrote: > > Hi all, > > I have been trying "-Dcassandra.disable_stcs_in_l0=true", but no luck so far. > Based on the source code it seems that this optio

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Stefano Ortolani
Hi all, I have been trying "-Dcassandra.disable_stcs_in_l0=true", but no luck so far. Based on the source code it seems that this option doesn't affect compactions while bootstrapping. I am getting quite confused as it seems I am not able to bootstrap a node if I don't hav
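For context, the flag Stefano is testing is a JVM system property; one common way to set it, assuming a stock cassandra-env.sh, is:

    # appended to cassandra-env.sh before restarting the node
    JVM_OPTS="$JVM_OPTS -Dcassandra.disable_stcs_in_l0=true"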

Re: Cassandra 3.11.0 compaction attempting impossible to complete compactions

2017-10-13 Thread Bruce Tietjen
We started seeing this behavior before we even discovered that it was possible to run manual compactions or cancel compactions by ID. On Fri, Oct 13, 2017 at 5:58 PM, Jeff Jirsa wrote: > Is it possible someone/something is running 'nodetool compact' explicitly? > That would ca
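Cancelling a compaction by ID, as referenced here, looks roughly like this on 3.x (the UUID is a placeholder):

    nodetool compactionstats                # note the id column of the offending row
    nodetool stop -id <compaction-uuid>     # stop that single compaction
    nodetool stop COMPACTION                # or stop all compactions of that type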

Re: Cassandra 3.11.0 compaction attempting impossible to complete compactions

2017-10-13 Thread Jeff Jirsa
Is it possible someone/something is running 'nodetool compact' explicitly? That would cause the behavior you're seeing. On Fri, Oct 13, 2017 at 4:24 PM, Bruce Tietjen < bruce.tiet...@imatsolutions.com> wrote: > > We are new to Cassandra and have built a test cluster and loaded some data > into

Re: Cassandra 3.11.0 compaction attempting impossible to complete compactions

2017-10-13 Thread Bruce Tietjen
: > What's the compaction strategy are you using? level compaction or size > tiered compaction? > > On Fri, Oct 13, 2017 at 4:31 PM, Bruce Tietjen < > bruce.tiet...@imatsolutions.com> wrote: > >> I hadn't noticed that is is now attempting

Re: Cassandra 3.11.0 compaction attempting impossible to complete compactions

2017-10-13 Thread Dikang Gu
Which compaction strategy are you using, level compaction or size tiered compaction? On Fri, Oct 13, 2017 at 4:31 PM, Bruce Tietjen < bruce.tiet...@imatsolutions.com> wrote: > I hadn't noticed that it is now attempting two impossible co

Re: Cassandra 3.11.0 compaction attempting impossible to complete compactions

2017-10-13 Thread Bruce Tietjen
I hadn't noticed that it is now attempting two impossible compactions:
    id                                     compaction type   keyspace        table   completed   total      unit    progress
    a7d1b130-b04c-11e7-bfc8-79870a3c4039   Compaction        perfectsearch   cxml    1.73 TiB    5.04 TiB   bytes   34.36%
    b7b98890

Re: Cassandra 3.11.0 compaction attempting impossible to complete compactions

2017-10-13 Thread Jon Haddad
Can you paste the output of cassandra compactionstats? What you’re describing should not happen. There’s a check that drops sstables out of a compaction task if there isn’t enough available disk space, see https://issues.apache.org/jira/browse/CASSANDRA-12979

Cassandra 3.11.0 compaction attempting impossible to complete compactions

2017-10-13 Thread Bruce Tietjen
We are new to Cassandra and have built a test cluster and loaded some data into the cluster. We are seeing compaction behavior that seems to violate what we read about its behavior. Our cluster is configured with JBOD with three 3.6T disks. Those disks currently respectively have the following used/

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-13 Thread Stefano Ortolani
Other little update: at the same time I see the number of pending tasks stuck (in this case at 1847); restarting the node doesn't help, so I can't really force the node to "digest" all those compactions. Meanwhile the disk space occupied is already twice the average load I

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-13 Thread Stefano Ortolani
mal? I was under the impression only the necessary SSTables were going to be streamed... Thanks for the help, Stefano On Wed, Aug 23, 2017 at 1:37 PM, kurt greaves wrote: > But if it also streams, it means I'd still be under-pressure if I am not >> mistaken. I am under the assumpt

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread kurt greaves
> > But if it also streams, it means I'd still be under-pressure if I am not > mistaken. I am under the assumption that the compactions are the by-product > of streaming too many SStables at the same time, and not because of my > current write load. > Ah yeah I wasn't

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread Stefano Ortolani
stream, but just not > join the ring until you trigger a joinRing with JMX. (You'll have to look > up the specific JMX call to make, there is one somewhere...) > But if it also streams, it means I'd still be under-pressure if I am not mistaken. I am under the assumption that the com

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread kurt greaves
> 1) You mean restarting the node in the middle of the bootstrap with > join_ring=false? Would this option require me to issue a nodetool bootstrap > resume, correct? I didn't know you could instruct the join via JMX. Would > it be the same as the nodetool bootstrap command? write_survey is slightl
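A sketch of the flow kurt outlines: stream during bootstrap but defer joining, then trigger the join once compactions catch up (flag placement assumes cassandra-env.sh):

    # on the new node, before it first starts:
    JVM_OPTS="$JVM_OPTS -Dcassandra.write_survey=true"
    # later, once nodetool compactionstats looks sane:
    nodetool join    # the same joinRing operation that is exposed over JMX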

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread Stefano Ortolani
n > over JMX when you are ready (once compactions catch up). > 2) A 1TB compaction might be completely plausible if you are using > compression and lots of SSTables are being streamed into L0 and triggering > STCS compactions, or being compacted with L1. > > ​ >

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread kurt greaves
you are ready (once compactions catch up). 2) A 1TB compaction might be completely plausible if you are using compression and lots of SSTables are being streamed into L0 and triggering STCS compactions, or being compacted with L1.

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread Stefano Ortolani
Hi Kurt, sorry, I forgot to specify. I am on 3.0.14. Cheers, Stefano On Wed, Aug 23, 2017 at 12:11 AM, kurt greaves wrote: > What version are you running? 2.2 has an improvement that will retain > levels when streaming and this shouldn't really happen. If you're on 2.1 > best bet is to upgrade

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-22 Thread kurt greaves
What version are you running? 2.2 has an improvement that will retain levels when streaming and this shouldn't really happen. If you're on 2.1 best bet is to upgrade

Bootstrapping a node fails because of compactions not keeping up

2017-08-22 Thread Stefano Ortolani
than 2000 compactions pending. I have tried to increase the compaction threads and un-throttle the throughput, but no luck. Also I've tried to reduce the streaming throughput (on all nodes) as much as possible (1 Mb/sec). The thing is that even if I manage to reduce the streaming throu

Re: Restoring a table cassandra - compactions

2017-06-01 Thread Marcus Eriksson
On Thu, Jun 1, 2017 at 4:25 PM, Jean Carlo wrote: > Hello. > > During the restore of a table using its snapshot and nodetool refresh, I > could see that cassandra starts to make a lot of compactions (depending on > the size of the data). > > I wanted to know why and I f

Restoring a table cassandra - compactions

2017-06-01 Thread Jean Carlo
Hello. During the restore of a table using its snapshot and nodetool refresh, I could see that cassandra starts to make a lot of compactions (depending on the size of the data). I wanted to know why and I found this in the code of cassandra 2.1.14. for CASSANDRA-4872 +// force
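The restore flow under discussion, sketched with illustrative paths (the snapshot files are copied into the live table directory before the refresh):

    cp /var/lib/cassandra/data/my_ks/my_table-<id>/snapshots/<snap>/* \
       /var/lib/cassandra/data/my_ks/my_table-<id>/
    nodetool refresh my_ks my_table    # picks up the sstables; expect a burst of compactions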

Re: too many compactions pending and compaction is slow on few tables

2017-04-07 Thread Carlos Rolo
> On Fri, Apr 7, 2017 at 11:19 AM Giri P wrote: >> Hi, >> we are continuously loading a table which has compaction strategy LCS >> and bloom filter off, and compactions are not catching up. Even the compaction is running slow on that table even after >> we increased throughput and concurrent compactors. >> Can someone point me to what I should be looking to tune this? >> Thanks >> Giri

Re: too many compactions pending and compaction is slow on few tables

2017-04-07 Thread Matija Gobec
her load on >>> the machine? >>> On Fri, Apr 7, 2017 at 11:19 AM Giri P wrote: >>> >>>> Hi, >>>> >>>> we are continuously loading a table which has properties properties >>>> compaction strategy LCS and bloom filter off and compactions

Re: too many compactions pending and compaction is slow on few tables

2017-04-07 Thread Giri P
e? >> On Fri, Apr 7, 2017 at 11:19 AM Giri P wrote: >> >>> Hi, >>> >>> we are continuously loading a table which has properties properties >>> compaction strategy LCS and bloom filter off and compactions are not >>> catching up . Even t

Re: too many compactions pending and compaction is slow on few tables

2017-04-07 Thread Giri P
you reloading it? > Is compaction throttled? What disks are you using? Any other load on the > machine? > On Fri, Apr 7, 2017 at 11:19 AM Giri P wrote: > >> Hi, >> >> we are continuously loading a table which has properties properties >> compaction strategy LCS a

Re: too many compactions pending and compaction is slow on few tables

2017-04-07 Thread Jonathan Haddad
; compaction strategy LCS and bloom filter off and compactions are not > catching up . Even the compaction is running slow on that table even after > we increases throughput and concurrent compactors. > > Can someone point me to what I should be looking to tune this ? > > Thanks > Giri >

too many compactions pending and compaction is slow on few tables

2017-04-07 Thread Giri P
Hi, we are continuously loading a table which has compaction strategy LCS and bloom filter off, and compactions are not catching up. Even the compaction is running slow on that table even after we increased throughput and concurrent compactors. Can someone point me to what

Re: large number of pending compactions, sstables steadily increasing

2016-11-10 Thread Eiti Kimura
on it around here? > > 2016-11-07 20:15 GMT+00:00 Ben Slater : > >> What I’ve seen happen a number of times is you get in a negative feedback >> loop: >> not enough capacity to keep up with compactions (often triggered by >> repair or compaction hitting a larg

Re: large number of pending compactions, sstables steadily increasing

2016-11-07 Thread Benjamin Roth
here? 2016-11-07 20:15 GMT+00:00 Ben Slater : > What I’ve seen happen a number of times is you get in a negative feedback > loop: > not enough capacity to keep up with compactions (often triggered by repair > or compaction hitting a large partition) -> more sstables -> more

Re: large number of pending compactions, sstables steadily increasing

2016-11-07 Thread Ben Slater
What I’ve seen happen a number of times is you get in a negative feedback loop: not enough capacity to keep up with compactions (often triggered by repair or compaction hitting a large partition) -> more sstables -> more expensive reads -> even less capacity to keep up with compactions

Re: large number of pending compactions, sstables steadily increasing

2016-11-07 Thread Eiti Kimura
> values), or mostly inserting new ones? If you're only inserting new >> data, it's probable using size-tiered compaction would work better for >> you. If you are TTL'ing whole rows, consider date-tiered. >> >> If leveled compaction is still the best

Re: How to throttle up/down compactions without a restart

2016-10-20 Thread Jeff Jirsa
;user@cassandra.apache.org" Date: Thursday, October 20, 2016 at 9:54 PM To: "user@cassandra.apache.org" , "thomasjul...@zoho.com" Subject: Re: How to throttle up/down compactions without a restart You can throttle compactions using nodetool setcompactionthroughput . Where x is i

Re: How to throttle up/down compactions without a restart

2016-10-20 Thread kurt Greaves
You can throttle compactions using nodetool setcompactionthroughput <x>, where x is in mbps. If you're using 2.2 or later this applies immediately to all running compactions; otherwise it applies only to any "new" compactions. You will want to be careful of allowing compactions to utilis
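Concretely, throttling down for a high-load period and back up afterwards would look like this (values illustrative):

    nodetool setcompactionthroughput 8     # throttle down during peak
    nodetool setcompactionthroughput 64    # throttle back up off-peak
    nodetool getcompactionthroughput       # confirm the current setting (where available)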

How to throttle up/down compactions without a restart

2016-10-20 Thread Thomas Julian
Hello, I was going through this presentation and Slide 55 caught my attention, i.e., "Throttled down compactions during high load period, throttled up during low load period". Can we throttle down compactions without a restart? If this can be done, what are all the

Re: large number of pending compactions, sstables steadily increasing

2016-09-11 Thread Jens Rantil
eveled compaction is still the best strategy, one way to catch up > with compactions is to have less data per partition -- in other words, > use more machines. Leveled compaction is CPU expensive. You are CPU > bottlenecked currently, or from the other perspective, you have too > much

Re: large number of pending compactions, sstables steadily increasing

2016-08-19 Thread Mark Rose
eveled compaction is still the best strategy, one way to catch up with compactions is to have less data per partition -- in other words, use more machines. Leveled compaction is CPU expensive. You are CPU bottlenecked currently, or from the other perspective, you have too much data per node for lev

Re: large number of pending compactions, sstables steadily increasing

2016-08-17 Thread Ezra Stuetzel
> How many concurrent compactors? > > > > > > > > *From: *Ezra Stuetzel > *Reply-To: *"user@cassandra.apache.org" > *Date: *Wednesday, August 17, 2016 at 11:39 AM > *To: *"user@cassandra.apache.org" > *Subject: *large number of pendin

Re: large number of pending compactions, sstables steadily increasing

2016-08-17 Thread Jeff Jirsa
e.org" Date: Wednesday, August 17, 2016 at 11:39 AM To: "user@cassandra.apache.org" Subject: large number of pending compactions, sstables steadily increasing I have one node in my cluster 2.2.7 (just upgraded from 2.2.6 hoping to fix issue) which seems to be stuck in a weird s

large number of pending compactions, sstables steadily increasing

2016-08-17 Thread Ezra Stuetzel
I have one node in my cluster 2.2.7 (just upgraded from 2.2.6 hoping to fix issue) which seems to be stuck in a weird state -- with a large number of pending compactions and sstables. The node is compacting about 500gb/day, number of pending compactions is going up at about 50/day. It is at about

Re: compactions stuck and restart doesn't kill it

2016-08-08 Thread Surbhi Gupta
I have faced a similar kind of issue while we were using LCS ... We ran the scrub and it was helpful to clear the corruption, and compaction caught up well after that ... On 8 August 2016 at 13:23, John Wong wrote: > Hi. > > I don't see any error logs indicating a corrupted sstable error. Is it safe t

Re: compactions stuck and restart doesn't kill it

2016-08-08 Thread John Wong
Hi. I don't see any error logs indicating a corrupted sstable error. Is it safe to run this? Thanks. On Mon, Aug 8, 2016 at 1:18 PM, Surbhi Gupta wrote: > Can you see if any of the sstables is corrupt? > I have seen with my past experience, if any of the sstables which is part > of the compaction

Re: compactions stuck and restart doesn't kill it

2016-08-08 Thread Surbhi Gupta
Can you see if any of the sstables is corrupt? I have seen with my past experience that if any of the sstables which is part of the compaction is corrupt, then the compaction kind of hangs. Try running nodetool scrub ... On 8 August 2016 at 09:34, John Wong wrote: > On Mon, Aug 8, 2016 at 11:16 AM, Surbh
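The scrub Surbhi suggests, sketched for a hypothetical table (online first; the offline tool is the fallback when the node won't stay up):

    nodetool scrub my_ks my_table    # rewrites sstables, skipping unreadable rows
    sstablescrub my_ks my_table      # offline variant, run with Cassandra stopped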

Re: compactions stuck and restart doesn't kill it

2016-08-08 Thread John Wong
On Mon, Aug 8, 2016 at 11:16 AM, Surbhi Gupta wrote: > Once you restart a node, compaction will start automatically; if you don't > want that, run nodetool disableautocompaction as soon as the node is started. > > Thanks. I certainly can give that a try for the specific column family, but even
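As a sketch, pinning compaction off for one table right after startup and re-enabling it later (table name hypothetical):

    nodetool disableautocompaction my_ks my_table
    # ... investigate, or let the node settle ...
    nodetool enableautocompaction my_ks my_table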

Re: compactions stuck and restart doesn't kill it

2016-08-08 Thread Surbhi Gupta
stats
    pending tasks: 2
    compaction type    keyspace    column family    completed    total    unit    progress
    Compaction    my_columnfamily    0    410383527 bytes    0.00%
    Active compaction remaining time : 0h00m06s
