We are using 3.11.1 (which we recently upgraded from 3.11.0) and just
started experimenting with LeveledCompactionStrategy. After loading data
for 24 hours, we started getting the following error:
ERROR [CompactionExecutor:628] 2017-10-27 11:58:21,748
CassandraDaemon.java:228 - Exception in thread
We run a fairly small production Cassandra 2.2.4 cluster with 5 nodes on
Rackspace VMs (4 cores, 4GB RAM, SSD backed), and whilst these nodes are on the
small side, day to day they have kept up with our workload fine.
We currently use SizeTieredCompactionStrategy and want to move to the
LeveledS
Hello, I have a question about the tombstone removal process for leveled
compaction strategy. I am migrating a lot of text data from a Cassandra
column family to Elasticsearch. The column family uses leveled compaction
strategy. As part of the migration, I am deleting the migrated rows from
anning
>>> several SStables with other compaction strategies (and hence leading to
>>> high latency read queries).
>>>
>>> I was honestly thinking of scrapping and rebuilding the SSTable from
>>> scratch if this workload is confirmed to be temporary. Knowing the answer
>> to my question above would help me second-guess my decision a bit less :)
>>
>> Cheers,
>> Stefano
>>
>> On Mon, May 25, 2015 at 9:52 AM, Jason Wee wrote:
>>
>>> , due to a really intensive delete workload, the SSTable is
on
>>
>> On Mon, May 25, 2015 at 10:12 AM, Manoj Khangaonkar <
>> khangaon...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> For a delete intensive workload (translates to write intensive), is
>>> there any reason to use leveled compaction? The reco
ternative like ttl?
>
> jason
>
>> On Mon, May 25, 2015 at 10:12 AM, Manoj Khangaonkar
>> wrote:
>> Hi,
>>
>> For a delete intensive workload (translates to write intensive), is there
>> any reason to use leveled compaction? The recommenda
intensive workload (translates to write intensive), is there
> any reason to use leveled compaction? The recommendation seems to be that
> leveled compaction is suited for read intensive workloads.
>
> Depending on your use case, you might be better off with date tiered or size
> tier
Hi,
For a delete intensive workload (translates to write intensive), is there
any reason to use leveled compaction? The recommendation seems to be that
leveled compaction is suited for read intensive workloads.
Depending on your use case, you might be better off with date tiered or size
tiered
Hi all,
I have a question re leveled compaction strategy that has been bugging me
quite a lot lately. Based on what I understood, a compaction takes place
when the SSTable gets to a specific size (10 times the size of its previous
generation). My question is about an edge case where, due to a
Check the size of your individual files. If your largest file is already more
than half then you can’t compact it using leveled compaction either. You can
take the system offline, split the largest file (I believe there is an
sstablesplit utility and I imagine it allows you to take off the tail
What other storage impacting commands or nuances do you have to consider
when you switch to leveled compaction? For instance, nodetool cleanup says
"Running the nodetool cleanup command causes a temporary increase in disk
space usage proportional to the size of your largest SSTable."
Are sstab
I may have misunderstood, but it seems that he was already using
LeveledCompaction
On Tue, Apr 7, 2015 at 3:17 AM, DuyHai Doan wrote:
> If you have SSDs, you can afford to switch to leveled compaction strategy,
> which requires much less than 50% of the current dataset as free space
>
If you have SSDs, you can afford to switch to leveled compaction strategy,
which requires much less than 50% of the current dataset as free space
On 5 Apr 2015 at 19:04, "daemeon reiydelle" wrote:
> You appear to have multiple java binaries in your path. That needs to be
> re
n LeveledCompactionStrategy should be able to
> compact my data, well at least this is what I understand.
>
> compaction as the size of the largest column family. Leveled compaction
> needs much less space for compaction, only 10 * sstable_size_in_mb.
> However, even if you’re
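The space math quoted above can be sketched numerically. A toy illustration (the function names and the 200 GB example figure are mine, not from the thread): size-tiered compaction can need as much free space as the largest column family, while leveled compaction needs roughly 10 * sstable_size_in_mb.

```python
# Rough temporary-space comparison described in the quote above (values in MB).
def stcs_compaction_headroom_mb(largest_cf_mb):
    """Worst-case temporary space for a size-tiered major compaction."""
    return largest_cf_mb

def lcs_compaction_headroom_mb(sstable_size_in_mb=5, fanout=10):
    """Approximate temporary space needed by a leveled compaction."""
    return fanout * sstable_size_in_mb

# A 200 GB column family: ~200,000 MB of headroom under size-tiered
# versus ~50 MB under leveled compaction with the old 5 MB default.
print(stcs_compaction_headroom_mb(200_000))  # 200000
print(lcs_compaction_headroom_mb())          # 50
```

With the larger sstable sizes discussed later in these threads (128-256 MB), the leveled figure grows proportionally but stays far below the size-tiered worst case.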
Hi,
I have a cluster of 5 nodes. We use cassandra 2.1.3.
The 5 nodes use about 50-57% of the 1T SSD.
One node managed to compact all its data. During one compaction this node used
almost 100% of the drive. The other nodes refuse to continue compaction
claiming that there is not enough disk space
I was reading this
http://www.datastax.com/dev/blog/leveled-compaction-in-apache-cassandra and
need some confirmation:
A) Sizing:
"*Each level is ten times as large as the previous*"
In the comments:
At October 14, 2011 at 12:33 am
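The "ten times as large" rule from the blog post is easy to sketch; a toy model (assuming the era's 5 MB default sstable size and the fanout of 10, both configurable):

```python
# Toy model of LCS level sizing: each level may hold fanout (10) times
# more data than the previous one, so with a 5 MB sstable size,
# L1 tops out around 50 MB, L2 at 500 MB, and so on.
def lcs_level_capacity_mb(level, sstable_size_in_mb=5, fanout=10):
    """Maximum total size of a level in MB (L0 is effectively unbounded)."""
    return sstable_size_in_mb * fanout ** level

for level in range(1, 5):
    print(f"L{level}: {lcs_level_capacity_mb(level):,} MB")
```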
Thanks for the response Rob,
And yes, the relevel helped the bloom filter issue quite a bit, although it
took a couple of days for the relevel to complete on a single node (so if
anyone tries this, be prepared)
-Mike
Sent from my iPhone
On Sep 23, 2013, at 6:34 PM, Robert Coli wrote:
> On F
On Fri, Sep 13, 2013 at 4:27 AM, Michael Theroux wrote:
> Another question on [the topic of row fragmentation when old rows get a
> large append to their "end" resulting in larger-than-expected bloom
> filters].
>
> Would forcing the table to relevel help this situation? I believe the
> process t
appended to columns in a row in the target column family. Both column
> families are using leveled compaction, and both column families have over 100
> million rows.
>
> However, our bloom filters on the target column family grow dramatically
> (less than double) after co
olumn family. Both column families
are using leveled compaction, and both column families have over 100 million
rows.
However, our bloom filters on the target column family grow dramatically (less
than double) after converting less than 1/4 of the data. I assume this is
because new changes ar
LCS fragmentation comes up a lot here and this issue caught a lot of us on
IRC by surprise so I'm going to pass it on here:
https://issues.apache.org/jira/browse/CASSANDRA-5271
See this thread for additional context:
http://www.mail-archive.com/user@cassandra.apache.org/msg31416.html
PM, PARASHAR, BHASKARJYA JAY wrote:
> Thanks Sankalp…I will look at these.
>
>
> *From:* sankalp kohli [mailto:kohlisank...@gmail.com]
> *Sent:* Tuesday, July 09, 2013 3:22 PM
> *To:* user@cassandra.apache.org
>
> *Subject:* Re: Leveled Compaction, nu
Thanks Sankalp...I will look at these.
From: sankalp kohli [mailto:kohlisank...@gmail.com]
Sent: Tuesday, July 09, 2013 3:22 PM
To: user@cassandra.apache.org
Subject: Re: Leveled Compaction, number of SStables growing.
Do you have lot of sstables in L0?
Since you moved from size tiered
ill have to increase the size.
>
>
> *From:* Jake Luciani [mailto:jak...@gmail.com]
> *Sent:* Tuesday, July 09, 2013 2:05 PM
> *To:* user
> *Subject:* Re: Leveled Compaction, number of SStables growing.
>
>
> We run with 128mb some run wit
Thanks Jake. Guess we will have to increase the size.
From: Jake Luciani [mailto:jak...@gmail.com]
Sent: Tuesday, July 09, 2013 2:05 PM
To: user
Subject: Re: Leveled Compaction, number of SStables growing.
We run with 128mb some run with 256mb. Leveled compaction creates fixed sized
sstables
We run with 128mb some run with 256mb. Leveled compaction creates fixed
sized sstables by design so this is the only way to lower the file count.
On Tue, Jul 9, 2013 at 2:56 PM, PARASHAR, BHASKARJYA JAY wrote:
> Hi,
>
>
> We recently switched from size tiered compac
Hi,
We recently switched from size tiered compaction to Leveled compaction. We made
this change because our rows are frequently updated. We also have a lot of data.
With size-tiered compaction, we have about 5-10 sstables per CF. So with about
15 CF's we had about 100 sstables.
With a ss
dra to rebuild the
sstables as bigger once I have updated the column family definition?
thanks
>
> cheers
>
>
>> -Wei
>>
>> --
>> *From: *"Franc Carter"
>> *To: *user@cassandra.apache.org
>> *Sent: *Sunday, Jun
n 100MB. Do your own test to
> find a "right" number.
>
> -Wei
>
> --
> *From: *"Franc Carter"
> *To: *user@cassandra.apache.org
> *Sent: *Sunday, June 16, 2013 10:15:22 PM
> *Subject: *Re: Large number of files for Leveled Compaction
"user@cassandra.apache.org"
Date: Sunday, June 16, 2013 11:37 PM
To: "user@cassandra.apache.org", Wei Zhu <wz1...@yahoo.com>
Subject: Re: L
day, June 16, 2013 10:15:22 PM
> *Subject: *Re: Large number of files for Leveled Compaction
>
>
>
>
> On Mon, Jun 17, 2013 at 2:59 PM, Manoj Mainali wrote:
>
>> Not in the case of LeveledCompaction. Only SizeTieredCompaction merges
>> smaller sstables into large ones
Correction, the largest I heard is 256MB SSTable size.
- Original Message -
From: "Wei Zhu"
To: user@cassandra.apache.org
Sent: Sunday, June 16, 2013 10:28:25 PM
Subject: Re: Large number of files for Leveled Compaction
default value of 5MB is way too small in practice
- Original Message -
From: "Franc Carter"
To: user@cassandra.apache.org
Sent: Sunday, June 16, 2013 10:15:22 PM
Subject: Re: Large number of files for Leveled Compaction
On Mon, Jun 17, 2013 at 2:59 PM, Manoj Mainali < mainalima...@gmail.com >
wrote:
Not in the case of LeveledCompac
You can refer to this page
> http://www.datastax.com/dev/blog/leveled-compaction-in-apache-cassandra on
> details of how LeveledCompaction works.
>
>
Yes, but it seems I've misinterpreted that page ;-(
I took this paragraph
In figure 3, new sstables are added to the first level, L0,
Not in the case of LeveledCompaction. Only SizeTieredCompaction merges
smaller sstables into large ones. With the LeveledCompaction, the sstables
are always of fixed size but they are grouped into different levels.
You can refer to this page
http://www.datastax.com/dev/blog/leveled-compaction-in
sstables ?
thanks
> Cheers
>
> Manoj
>
>
> On Fri, Jun 7, 2013 at 1:44 PM, Franc Carter wrote:
>
>>
>> Hi,
>>
>> We are trialling Cassandra-1.2(.4) with Leveled compaction as it looks
>> like it may be a win for us.
>>
>> The first step of t
On Fri, Jun 7, 2013 at 2:44 PM, Franc Carter wrote:
>
> Hi,
>
> We are trialling Cassandra-1.2(.4) with Leveled compaction as it looks
> like it may be a win for us.
>
> The first step of testing was to push a fairly large slab of data into the
> Column Family - we did
lot of sstable counts.
Cheers
Manoj
On Fri, Jun 7, 2013 at 1:44 PM, Franc Carter wrote:
>
> Hi,
>
> We are trialling Cassandra-1.2(.4) with Leveled compaction as it looks
> like it may be a win for us.
>
> The first step of testing was to push a fairly large slab of d
Hi,
We are trialling Cassandra-1.2(.4) with Leveled compaction as it looks like
it may be a win for us.
The first step of testing was to push a fairly large slab of data into the
Column Family - we did this much faster (> x100) than we would in a
production environment. This has left the Col
> "user@cassandra.apache.org", Wei Zhu <wz1...@yahoo.com>
> > Date: Friday, March 8, 2013 11:11 AM
> > To: "user@cassandra.apache.org"
> *From:* Yuki Morishita [mailto:mor.y...@gmail.com]
> *Sent:* 08 March 2013 15:57
> *To:* user@cassandra.apache.org
> *Subject:* Re: leveled compaction
>
>
> It is SSTable counts in each level.
>
> SSTables in each level: [40/4, 442/10, 97, 967,
Cool! So if we exceed the threshold, is that an issue?
From: Yuki Morishita [mailto:mor.y...@gmail.com]
Sent: 08 March 2013 15:57
To: user@cassandra.apache.org
Subject: Re: leveled compaction
It is SSTable counts in each level.
SSTables in each level: [40/4, 442/10, 97, 967, 7691, 0, 0, 0
It is SSTable counts in each level.
> SSTables in each level: [40/4, 442/10, 97, 967, 7691, 0, 0, 0]
So you have 40 SSTables in L0, 442 in L1, 97 in L2 and so forth.
'40/4' and '442/10' have numbers after the slash; those are the expected maximum
number of SSTables in that level, and are only displayed when the count exceeds
that maximum.
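The display rule described above can be re-created as a sketch. This is my reconstruction, not Cassandra's actual code: the per-level maximum is inferred from the examples in this thread (4 for L0, 10^n for Ln), and a count is shown as "count/max" only when it exceeds that maximum.

```python
# Hypothetical re-creation of the cfstats "SSTables in each level" display:
# show "count/max" only when a level holds more sstables than expected
# (max inferred as 4 for L0 and 10**n for level n from the thread's examples).
def format_levels(counts, fanout=10):
    out = []
    for level, count in enumerate(counts):
        cap = 4 if level == 0 else fanout ** level
        out.append(f"{count}/{cap}" if count > cap else str(count))
    return "[" + ", ".join(out) + "]"

print(format_levels([40, 442, 97, 967, 7691, 0, 0, 0]))
# -> [40/4, 442/10, 97, 967, 7691, 0, 0, 0]
```

Note how 97 at L2 is shown without a slash: it is below that level's expected maximum of 100, matching the output quoted in this thread.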
Hi -
Can someone explain the meaning for the levelled compaction in cfstats -
SSTables in each level: [40/4, 442/10, 97, 967, 7691, 0, 0, 0]
SSTables in each level: [61/4, 9, 92, 945, 8146, 0, 0, 0]
SSTables in each level: [34/4, 1000/10, 100, 953, 8184, 0, 0, 0
Thanks,
Kanwar
"user@cassandra.apache.org", Wei Zhu <wz1...@yahoo.com>
> Date: Friday, March 8, 2013 11:11 AM
> To: "user@cassandra.apache.org"
AM
To: "user@cassandra.apache.org"
Subject: Re: Size Tiered -> Leveled Compaction
I have the same question.
We started with the default 5M and the compaction after repair takes too long
on a 200G node, so we
utiful on paper.
Thanks.
-Wei
From: Alain RODRIGUEZ
To: user@cassandra.apache.org
Cc: Wei Zhu
Sent: Friday, March 8, 2013 1:25 AM
Subject: Re: Size Tiered -> Leveled Compaction
I'm still wondering about how to choose the size of the sstable under LCS.
Default is 5
t for
>> fun, you can look at a file called $CFName.json in your data directory and
>> it tells you the SSTable distribution among different levels.
>>
>> -Wei
>>
>> --
>> *From:* Charles Brophy
>> *To:* user@cassandra.apac
------
> *From:* Charles Brophy
> *To:* user@cassandra.apache.org
> *Sent:* Thursday, February 14, 2013 8:29 AM
> *Subject:* Re: Size Tiered -> Leveled Compaction
>
> I second these questions: we've been looking into changing some of our CFs
> to use leveled compac
"user@cassandra.apache.org"
Subject: Re: Size Tiered -> Leveled Compaction
"After running a major compaction, automatic minor compactions are no longer
triggered,"
... Because of the size difference between the big sstable ge
> "user@cassandra.apache.org"
> Date: Monday, February 25, 2013 7:15 AM
> To: "user@cassandra.apache.org"
Subject: Re: Size Tiered -> Leveled Compaction
"I am confused. I thought running compact turns off the minor compactions and
users are actually supposed to run upgradesstables (maybe I am on old
documentation?)"
Well, that's not true. What ha
you run a major compaction on a 10GB CF, you
have almost no chance of getting that (big) sstable compacted again. You
will have to wait for other sstables to reach this size or run another
major compaction.
But anyways, this doesn't apply here because we are speaking of LCS
(leveled compaction s
"user@cassandra.apache.org"
Date: Sunday, February 24, 2013 7:45 PM
To: "user@cassandra.apache.org"
Subject: Re: Size Tiered -> Le
larly well suited for Leveled Compaction). We have two more to convert,
but those will wait until next weekend. So far no issues, and, we've seen some
positive results.
To help answer some of my own questions I posed in this thread, and others have
expressed interest in knowing, the st
6137
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 23/02/2013, at 6:56 AM, Mike wrote:
> Hello,
>
> Still doing research before we potentially move one of our column families
> from Size Tiered->Leve
Hello,
Still doing research before we potentially move one of our column
families from Size Tiered->Leveled compaction this weekend. I was doing
some research around some of the bugs that were filed against leveled
compaction in Cassandra and I found this:
https://issues.apache.org/j
mance.
- Original Message -
From: "Mike"
To: user@cassandra.apache.org
Sent: Sunday, February 17, 2013 4:50:40 AM
Subject: Re: Size Tiered -> Leveled Compaction
Hello Wei,
First thanks for this response.
Out of curiosity, what SSTable size did you choose for your useca
tion among different levels.
-Wei
*From:* Charles Brophy
*To:* user@cassandra.apache.org
*Sent:* Thursday, February 14, 2013 8:29 AM
*Subject:* Re: Size Tiered -> Leveled Compaction
I second these questions: we've been looking into changing some of our
ows found while scrubbing %s; Those have been written (in
> order) to a new sstable (%s)"
>
> In the logs.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 12/02/2013
*To:* user@cassandra.apache.org
*Sent:* Thursday, February 14, 2013 8:29 AM
*Subject:* Re: Size Tiered -> Leveled Compaction
I second these questions: we've been looking into changing some of
our CFs to use leveled compaction as well. If anybod
e the SSTable
> size.
>
> By the way, there is no concept of Major compaction for LCS. Just for fun,
> you can look at a file called $CFName.json in your data directory and it
> tells you the SSTable distribution among different levels.
>
> -Wei
>
> From: Charl
ata directory and it tells you
the SSTable distribution among different levels.
-Wei
From: Charles Brophy
To: user@cassandra.apache.org
Sent: Thursday, February 14, 2013 8:29 AM
Subject: Re: Size Tiered -> Leveled Compaction
I second these questions: we
I second these questions: we've been looking into changing some of our CFs
to use leveled compaction as well. If anybody here has the wisdom to answer
them it would be of wonderful help.
Thanks
Charles
On Wed, Feb 13, 2013 at 7:50 AM, Mike wrote:
> Hello,
>
> I'm investiga
Hello,
I'm investigating the transition of some of our column families from
Size Tiered -> Leveled Compaction. I believe we have some
high-read-load column families that would benefit tremendously.
I've stood up a test DB Node to investigate the transition. I
successfully alt
) to a new sstable (%s)"
>
> In the logs.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 12/02/2013, at 6:13 AM, Andre Sprenger
> wrote:
>
> Hi,
>
> I
In the logs.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 12/02/2013, at 6:13 AM, Andre Sprenger wrote:
> Hi,
>
> I'm running a 6 node Cassandra 1.1.5 cluster on EC2. We have switched to
> lev
Hi,
I'm running a 6 node Cassandra 1.1.5 cluster on EC2. We have switched to
leveled compaction a couple of weeks ago,
this has been successful. Some days ago 3 of the nodes start to log the
following exception during compaction of
a particular column family:
ERROR [CompactionExecutor:726]
I would be careful with the patch that was referred to above, it
hasn't been reviewed, and from a glance it appears that it will cause
an infinite compaction loop if you get more than 4 SSTables at max size.
it will, you need to set up the max sstable size correctly.
may be
> >>> committed to doing it periodically because you create one big file that
> >>> will take forever to naturally compact against 3 like sized files.
> >>> 2) If you rely heavily on file cache (rather than large row caches),
> each
> >>> major compac
to naturally compact against 3 like sized files.
>>> 2) If you rely heavily on file cache (rather than large row caches), each
>>> major compaction effectively invalidates the entire file cache because
>>> everything is written to one new large file.
>>>
>>>
jor compaction effectively invalidates the entire file cache because
>> everything is written to one new large file.
>>
>> --
>> Jim Cistaro
>>
>> On 11/9/12 11:27 AM, "Rob Coli" wrote:
>>
>> >On Thu, Nov 8, 2012 at 10:12 AM, B. Todd Burru
cache because
> everything is written to one new large file.
>
> --
> Jim Cistaro
>
> On 11/9/12 11:27 AM, "Rob Coli" wrote:
>
> >On Thu, Nov 8, 2012 at 10:12 AM, B. Todd Burruss
> wrote:
> >> my question is would leveled compaction help to get rid
8, 2012 at 10:12 AM, B. Todd Burruss wrote:
>> my question is would leveled compaction help to get rid of the
>> tombstoned
>> data faster than size tiered, and therefore reduce the disk space usage?
>
>You could also...
>
>1) run a major compaction
>2) code up ss
On Thu, Nov 8, 2012 at 10:12 AM, B. Todd Burruss wrote:
> my question is would leveled compaction help to get rid of the tombstoned
> data faster than size tiered, and therefore reduce the disk space usage?
You could also...
1) run a major compaction
2) code up sstablesplit
3) profit!
g
> more disk space than we anticipated.
> >
> > we are very write heavy compared to reads, and we delete the data after
> N number of days (depends on the column family, but N is around 7 days)
> >
> > my question is would leveled compaction help to get rid of the
>
isk space
> than we anticipated.
>
> we are very write heavy compared to reads, and we delete the data after N
> number of days (depends on the column family, but N is around 7 days)
>
> my question is would leveled compaction help to get rid of the tombstoned
> data faster t
@ben, thx, we will be deploying 2.2.1 of DSE soon and will try to
setup a traffic sampling node so we can test leveled compaction.
we essentially keep a rolling window of data written once. it is
written, then after N days it is deleted, so it seems that leveled
compaction should help
On Thu
thanks for the links! i had forgotten about live sampling
On Thu, Nov 8, 2012 at 11:41 AM, Brandon Williams wrote:
> On Thu, Nov 8, 2012 at 1:33 PM, Aaron Turner wrote:
>> There are also ways to bring up a test node and just run Level Compaction on
>> that. Wish I had a URL handy, but hopefull
Also to answer your question, LCS is well suited to workloads where
overwrites and tombstones come into play. The tombstones are _much_ more
likely to be merged with LCS than STCS.
I would be careful with the patch that was referred to above, it hasn't
been reviewed, and from a glance it appears t
On Thu, Nov 8, 2012 at 1:33 PM, Aaron Turner wrote:
> There are also ways to bring up a test node and just run Level Compaction on
> that. Wish I had a URL handy, but hopefully someone else can find it.
This rather handsome fellow wrote a blog about it:
http://www.datastax.com/dev/blog/whats-new
:33 AM, Aaron Turner wrote:
> "kill performance" is relative. Leveled Compaction basically costs 2x
> disk IO. Look at iostat, etc and see if you have the headroom.
>
> There are also ways to bring up a test node and just run Level Compaction
> on that. Wish I had a
LCS works well in specific circumstances, this blog post gives some good
considerations: http://www.datastax.com/dev/blog/when-to-use-leveled-compaction
On Nov 8, 2012, at 1:33 PM, Aaron Turner wrote:
> "kill performance" is relative. Leveled Compaction basically costs 2x disk
"kill performance" is relative. Leveled Compaction basically costs 2x disk
IO. Look at iostat, etc and see if you have the headroom.
There are also ways to bring up a test node and just run Level Compaction
on that. Wish I had a URL handy, but hopefully someone else can find it.
we are running Datastax enterprise and cannot patch it. how bad is
"kill performance"? if it is so bad, why is it an option?
On Thu, Nov 8, 2012 at 10:17 AM, Radim Kolar wrote:
> Dne 8.11.2012 19:12, B. Todd Burruss napsal(a):
>
>> my question is would leveled compact
Dne 8.11.2012 19:12, B. Todd Burruss napsal(a):
my question is would leveled compaction help to get rid of the
tombstoned data faster than size tiered, and therefore reduce the disk
space usage?
leveled compaction will kill your performance. get patch from jira for
maximum sstable size per
, and we delete the data after N
number of days (depends on the column family, but N is around 7 days)
my question is would leveled compaction help to get rid of the tombstoned
data faster than size tiered, and therefore reduce the disk space usage?
thx
I think this JIRA answers your question:
https://issues.apache.org/jira/browse/CASSANDRA-2610
which explains that, in order not to duplicate work (creation of Merkle trees),
repair is done on all replicas for a range.
Cheers,
Omid
On Tue, Sep 25, 2012 at 8:27 AM, Sergey Tryuber wrote:
> Hi Radim
>
> Unfortuna
Hi Radim
Unfortunately, the number of compaction tasks is not overestimated. The number
is decremented one-by-one and this process takes several hours for our 40GB
node. Also, when a lot of compaction tasks appear, we see that total disk
space used (via JMX) is doubled and Cassandra really tries to c
Repair process by itself is going well in a background, but the issue
I'm concerned is a lot of unnecessary compaction tasks
The number in the compaction tasks counter is overestimated. For example, I have
1100 tasks left and if I stop inserting data, all tasks will finish
within 30 minutes.
I
Hi Guys
We've noticed a strange behavior on our 3-node staging Cassandra cluster
with RF=2 and LeveledCompactionStrategy. When we run "nodetool repair
-pr" on a node, the other nodes start a "validation"
process and when this process is finished one of the other 2 nodes reports
that there are app
On 3 August 2012 21:31, Data Craftsman 木匠 wrote:
>
> Nobody use Leveled Compaction with CQL 3.0 ?
I tried this, and I can't get it to work either.
I'm using:
[cqlsh 2.2.0 | Cassandra 1.1.2 | CQL spec 3.0.0 | Thrift protocol 19.32.0]
Here's what my create table look
Nobody use Leveled Compaction with CQL 3.0 ?
-Z
On Tue, Jul 31, 2012 at 11:17 AM, Data Craftsman 木匠
wrote:
> Sorry for my stupid simple question. How to create a COLUMNFAMILY with
> Leveled Compaction?
>
> There is no example in documentation:
> http://www.datastax.com/docs/1.
Sorry for my stupid simple question. How to create a COLUMNFAMILY with
Leveled Compaction?
There is no example in documentation:
http://www.datastax.com/docs/1.1/configuration/storage_configuration#compaction-strategy
I tried it on Cassandra 1.1.0 and 1.1.2; both failed. The COLUMNFAMILY
is still
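For readers landing on this thread later: a sketch of the syntax that eventually became standard. The table name and columns below are made up for illustration. In Cassandra 1.2+ the finalized CQL 3 uses a compaction map; my understanding is that the 1.1-era CQL 3 beta in this thread instead used a `compaction_strategy_class` option, which may be why the statement failed.

```sql
-- Hypothetical table; compaction map syntax is the finalized CQL 3 form
-- (Cassandra 1.2 onward). On 1.1-era CQL 3 the option was spelled
-- compaction_strategy_class = 'LeveledCompactionStrategy' instead.
CREATE TABLE users (
    user_id text PRIMARY KEY,
    name    text
) WITH compaction = {
    'class': 'LeveledCompactionStrategy',
    'sstable_size_in_mb': 160
};
```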
tables it's told to. But if you enable debug
> logging on LeveledManifest you'd see what you want. ("Compaction
> candidates for L{} are {}")
>
> 2012/4/5 Radim Kolar :
>> it would be really helpful if leveled compaction prints level into syslog.
>>
>
CompactionExecutor doesn't have level information available to it; it
just compacts the sstables it's told to. But if you enable debug
logging on LeveledManifest you'd see what you want. ("Compaction
candidates for L{} are {}")
2012/4/5 Radim Kolar :
> it would b
lly helpful if leveled compaction prints level into syslog.
>
> Demo:
>
> INFO [CompactionExecutor:891] 2012-04-05 22:39:27,043 CompactionTask.java
> (line 113) Compacting ***LEVEL 1***
> [SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19690-Data.db
it would be really helpful if leveled compaction prints level into syslog.
Demo:
INFO [CompactionExecutor:891] 2012-04-05 22:39:27,043
CompactionTask.java (line 113) Compacting ***LEVEL 1***
[SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19690-Data.db'),
SST