>>> to my question above would help me second-guess my decision a bit less :)
>>>
>>> Cheers,
>>> Stefano
>>>
>>> On Mon, May 25, 2015 at 9:52 AM, Jason Wee wrote:
>>>
>>>> ..., due to a really intensive delete workload, the ... promoted to ...
>>>
>>> Is Cassandra designed for *delete*-heavy workloads? I doubt so. Perhaps look
>>> at some other alternative like TTL?
>>>
>>> jason
>>>
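For reference, a minimal CQL sketch of the TTL alternative Jason suggests; the
keyspace, table, and 7-day value below are illustrative placeholders, not taken
from this thread:

    -- Hypothetical table: let each write expire on its own instead of issuing DELETEs.
    INSERT INTO myks.events (id, payload)
    VALUES (uuid(), 'example payload')
    USING TTL 604800;   -- 604800 seconds = 7 days

Note that expired cells still surface as tombstones at compaction time, so TTL
changes when and how deletions happen rather than eliminating them.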
>>> On Mon, May 25, 2015 at 10:12 AM, Manoj Khangaonkar wrote:
>>>
>>> The recommendation seems to be that leveled compaction is suited for read
>>> intensive workloads.
>>>
>>> Depending on your use case, you might be better off with date tiered or
>>> size tiered strategy.
>>>
>>> regards
On Sun, May 24, 2015 at 10:50 AM, Stefano Ortolani wrote:
> Hi all,
>
> I have a question re leveled compaction strategy that has been bugging me
> quite a lot lately. Based on what I understood, a compaction takes place
> when the SSTable gets to a specific ...
Thanks Sankalp... I will look at these.

From: sankalp kohli [mailto:kohlisank...@gmail.com]
Sent: Tuesday, July 09, 2013 3:22 PM
To: user@cassandra.apache.org
Subject: Re: Leveled Compaction, number of SStables growing.

Do you have a lot of sstables in L0?
Since you moved from size tiered ...
Thanks Jake. Guess we will have to increase the size.
From: Jake Luciani [mailto:jak...@gmail.com]
Sent: Tuesday, July 09, 2013 2:05 PM
To: user
Subject: Re: Leveled Compaction, number of SStables growing.
We run with 128mb, some run with 256mb. Leveled compaction creates fixed-size
sstables by design, so this is the only way to lower the file count.
On Tue, Jul 9, 2013 at 2:56 PM, PARASHAR, BHASKARJYA JAY wrote:
> Hi,
>
> We recently switched from size tiered compaction to leveled compaction ...
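As a hedged illustration of the "increase the size" route discussed above: with
CQL 3 the LCS target size is a per-table compaction sub-option. The keyspace,
table name, and the 256 MB value here are placeholders:

    ALTER TABLE myks.mytable
    WITH compaction = {
        'class': 'LeveledCompactionStrategy',
        'sstable_size_in_mb': '256'
    };

Existing sstables are not rewritten immediately; the larger target applies as
compaction naturally rewrites the data.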
Cool! So if we exceed the threshold, is that an issue?
From: Yuki Morishita [mailto:mor.y...@gmail.com]
Sent: 08 March 2013 15:57
To: user@cassandra.apache.org
Subject: Re: leveled compaction
It is SSTable counts in each level.
> SSTables in each level: [40/4, 442/10, 97, 967, 7691, 0, 0, 0]
So you have 40 SSTables in L0, 442 in L1, 97 in L2 and so forth.
'40/4' and '442/10' have numbers after the slash; those are the expected maximum
number of SSTables in that level, and are only displayed when ...
I would be careful with the patch that was referred to above, it
hasn't been reviewed, and from a glance it appears that it will cause
an infinite compaction loop if you get more than 4 SSTables at max size.
it will; you need to set up the max sstable size correctly.
On Sat, Nov 10, 2012 at 7:17 PM, Edward Capriolo wrote:
No it does not exist. Rob and I might start a donation page and give
the money to whoever is willing to code it. If someone would write a
tool that would split an sstable into 4 smaller sstables (even an
offline command line tool) I would paypal them a hundo.
On Sat, Nov 10, 2012 at 1:10 PM, Aaron
Nope. I think at least once a week I hear someone suggest one way to solve
their problem is to "write an sstablesplit tool".
I'm pretty sure that:
Step 1. Write sstablesplit
Step 2. ???
Step 3. Profit!
On Sat, Nov 10, 2012 at 9:40 AM, Alain RODRIGUEZ wrote:
@Rob Coli

Does the "sstablesplit" function exist somewhere?
2012/11/10 Jim Cistaro
For some of our clusters, we have taken the periodic major compaction route.

There are a few things to consider:
1) Once you start major compacting, depending on data size, you may be
committed to doing it periodically because you create one big file that
will take forever to naturally compact again ...
On Thu, Nov 8, 2012 at 10:12 AM, B. Todd Burruss wrote:
> my question is would leveled compaction help to get rid of the tombstoned
> data faster than size tiered, and therefore reduce the disk space usage?
You could also...
1) run a major compaction
2) code up sstablesplit
3) profit!
This method ...
The rules for tombstone eviction are as follows (regardless of your
compaction strategy):
1. gc_grace must be expired, and
2. No other row fragments can exist for the row that aren't also
participating in the compaction.
For LCS, there is no 'rule' that the tombstones can only be evicted at the
highest level ...
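To make rule 1 concrete: gc_grace is the per-table gc_grace_seconds option
(default 864000 seconds, i.e. ten days). A hedged CQL sketch with a placeholder
table name and an illustrative value; shortening the window is only safe if
repair runs more often than the new setting:

    ALTER TABLE myks.mytable WITH gc_grace_seconds = 259200;  -- 3 days, illustrative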
On 2012-11-08, at 1:12 PM, B. Todd Burruss wrote:
> we are having the problem where we have huge SSTABLEs with tombstoned data in
> them that is not being compacted soon enough (because size tiered compaction
> requires, by default, 4 like-sized SSTables). this is using more disk space
> than ...
@ben, thx, we will be deploying 2.2.1 of DSE soon and will try to
set up a traffic sampling node so we can test leveled compaction.
we essentially keep a rolling window of data written once. it is
written, then after N days it is deleted, so it seems that leveled
compaction should help
On Thu, Nov ...
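A hedged sketch of how that "written once, deleted after N days" rolling window
could be expressed with a table-level default TTL instead of explicit deletes.
The table name, schema, and 30-day value are placeholders, and
default_time_to_live comes from newer Cassandra releases than the ones
discussed in this 2012 thread:

    CREATE TABLE IF NOT EXISTS myks.rolling_data (
        key   text,
        ts    timestamp,
        value blob,
        PRIMARY KEY (key, ts)
    ) WITH default_time_to_live = 2592000;  -- N = 30 days, illustrative only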
thanks for the links! i had forgotten about live sampling
On Thu, Nov 8, 2012 at 11:41 AM, Brandon Williams wrote:
> On Thu, Nov 8, 2012 at 1:33 PM, Aaron Turner wrote:
>> There are also ways to bring up a test node and just run Level Compaction on
>> that. Wish I had a URL handy, but hopefully someone else can find it.
Also to answer your question, LCS is well suited to workloads where
overwrites and tombstones come into play. The tombstones are _much_ more
likely to be merged with LCS than STCS.
I would be careful with the patch that was referred to above, it hasn't
been reviewed, and from a glance it appears that it will cause an infinite
compaction loop if you get more than 4 SSTables at max size.
On Thu, Nov 8, 2012 at 1:33 PM, Aaron Turner wrote:
> There are also ways to bring up a test node and just run Level Compaction on
> that. Wish I had a URL handy, but hopefully someone else can find it.
This rather handsome fellow wrote a blog about it:
http://www.datastax.com/dev/blog/whats-new
http://www.datastax.com/docs/1.1/operations/tuning#testing-compaction-and-compression
Write Survey mode.
After you have it up and running you can modify the column family mbean to
use LeveledCompactionStrategy on that node to see how your hardware/load
fares with LCS.
On Thu, Nov 8, 2012 at 11:
LCS works well in specific circumstances, this blog post gives some good
considerations: http://www.datastax.com/dev/blog/when-to-use-leveled-compaction
On Nov 8, 2012, at 1:33 PM, Aaron Turner wrote:
> "kill performance" is relative. Leveled Compaction basically costs 2x disk
> IO. Look at
"kill performance" is relative. Leveled Compaction basically costs 2x disk
IO. Look at iostat, etc and see if you have the headroom.
There are also ways to bring up a test node and just run Level Compaction
on that. Wish I had a URL handy, but hopefully someone else can find it.
Also, if you're ...
we are running Datastax enterprise and cannot patch it. how bad is
"kill performance"? if it is so bad, why is it an option?
On Thu, Nov 8, 2012 at 10:17 AM, Radim Kolar wrote:
On 8.11.2012 at 19:12, B. Todd Burruss wrote:
my question is would leveled compaction help to get rid of the
tombstoned data faster than size tiered, and therefore reduce the disk
space usage?
leveled compaction will kill your performance. get patch from jira for
maximum sstable size per CF
For details, open conf/log4j-server.properties and add the following configuration:
log4j.logger.org.apache.cassandra.db.compaction.LeveledManifest=DEBUG
fyi.
maki
2012/4/10 Jonathan Ellis :
CompactionExecutor doesn't have level information available to it; it
just compacts the sstables it's told to. But if you enable debug
logging on LeveledManifest you'd see what you want. ("Compaction
candidates for L{} are {}")
2012/4/5 Radim Kolar :
> it would be really helpful if leveled compaction ...
If you would like to see a change, create a request for an improvement here:
https://issues.apache.org/jira/browse/CASSANDRA
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 6/04/2012, at 12:51 PM, Radim Kolar wrote:
> it would be really helpful if leveled compaction ...
It looks like what you're seeing is, stress far outpaced the ability
of compaction to keep up (which is normal for our default settings,
which prioritize maintaining request throughput over compaction), so
LCS will grab a bunch of L0 sstables, compact them together with L1
resulting in a spike of L1 ...
Jonathan Ellis <...@gmail.com> writes:
> You should look at the org.apache.cassandra.db.compaction package and
> read the original leveldb implementation notes at
> http://leveldb.googlecode.com/svn/trunk/doc/impl.html for more
> details.
>
There is an important rule in
http://leveldb.googlecode.com/svn/trunk/doc/impl.html ...
On Fri, Dec 2, 2011 at 8:13 PM, liangfeng wrote:
> 1. There is no implementation in Cassandra 1.0 to ensure the conclusion "Only
> enough space for 10x the sstable size needs to be reserved for temporary use
> by compaction", so one special compaction may need big free disk space all
> the same.
Jonathan Ellis <...@gmail.com> writes:
>
> I think you're confusing "temporary space used during a compaction
> operation" with "total i/o done by compaction."
>
> Leveled compaction *will* do more i/o than size-tiered, because it's
> enforcing tighter guarantees on how compacted the data is.
>
I think you're confusing "temporary space used during a compaction
operation" with "total i/o done by compaction."
Leveled compaction *will* do more i/o than size-tiered, because it's
enforcing tighter guarantees on how compacted the data is.
On Fri, Dec 2, 2011 at 1:01 AM, liangfeng wrote:
> He