ose
> interface getting hammered, right?
>
>
>
> Thanks,
>
> Thomas Miller
>
>
>
> *From:* Andrei Ivanov [mailto:aiva...@iponweb.net]
> *Sent:* Thursday, April 23, 2015 4:40 PM
>
> *To:* user@cassandra.apache.org
> *Subject:* Re: Adding New Node Iss
Thomas, just in case you missed it: there is a bug with the throughput
setting prior to 2.0.13. Here is the link:
https://issues.apache.org/jira/browse/CASSANDRA-8852
So it may happen that you are effectively setting it to 1600 megabytes.
Andrei
On Thu, Apr 23, 2015 at 11:22 PM, Ali Akhtar wrote:
> What version are
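For reference, the setting in question lives in cassandra.yaml (and can be changed at runtime with `nodetool setstreamthroughput`). A sketch, with an illustrative value; the exact misinterpretation is described in the linked ticket:

```yaml
# cassandra.yaml -- cap on outbound streaming throughput, in megabits/s.
# Prior to 2.0.13 (CASSANDRA-8852), the configured value could be applied
# incorrectly, so the effective limit was much higher than intended.
stream_throughput_outbound_megabits_per_sec: 200
```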
Just in case it helps - we are running C* with sstable sizes of something
like 2.5 TB and ~4TB/node. No evident problems except the time it takes to
compact.
Andrei.
On Wed, Apr 22, 2015 at 5:36 PM, Anuj Wadehra
wrote:
> Thanks Robert!!
>
> The JIRA was very helpful in understanding how tombsto
Hi all,
I know there was a thread on the same topic a while ago, but my problem
is that I'm seeing exactly the same behavior with C* 2.0.13. That is,
compacted sstables remain there after compaction for a long time (~24
hours; I never waited longer than that). Those sstables are removed upon resta
ly easy to implement it.
>
>
> On Tue, Nov 25, 2014 at 1:25 PM, Andrei Ivanov wrote:
>>
>> Nikolai,
>>
>> Just in case you've missed my comment in the thread (guess you have) -
>> increasing sstable size does nothing (in our case at least). That is,
>>
Nikolai,
Just in case you've missed my comment in the thread (guess you have) -
increasing sstable size does nothing (in our case at least). That is,
it's not worse but the load pattern is still the same - doing nothing
most of the time. So, I switched to STCS and we will have to live with
extra s
at write-heavy you should definitely go with STCS, LCS
> optimizes for reads by doing more compactions
>
> /Marcus
>
> On Tue, Nov 25, 2014 at 11:22 AM, Andrei Ivanov wrote:
>>
>> Hi Jean-Armel, Nikolai,
>>
>> 1. Increasing sstable size doesn't work
Hi Jean-Armel, Nikolai,
1. Increasing sstable size doesn't work (well, I think, unless we
"overscale" - add more nodes than really necessary, which is
prohibitive for us in a way). Essentially there is no change. I gave
up and will go for STCS;-(
2. We use 2.0.11 as of now
3. We are running on EC
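Point 1's switch to STCS is a table-level compaction option change. A CQL sketch; the keyspace and table names are placeholders, not from the thread:

```sql
-- Hypothetical names; switch a table from LCS to size-tiered compaction.
ALTER TABLE myks.mytable
  WITH compaction = {'class': 'SizeTieredCompactionStrategy'};
```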
y primary key's hash and then simply do something
> like mod 4 and add this to the table name :) This would effectively reduce
> the number of sstables and amount of data per table (CF). I kind of like
> this idea more - yes, a bit more challenge at coding level but obvious
> b
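The sharding idea above can be sketched as follows. The md5-based hash, the shard count of 4, and the function name are illustrative assumptions, not something prescribed in the thread:

```python
import hashlib

NUM_SHARDS = 4  # illustrative; the thread suggests "mod 4"

def shard_table(base_name: str, partition_key: str) -> str:
    """Pick a per-shard table name from a stable hash of the partition key."""
    # md5 gives a hash that is stable across processes and restarts
    # (Python's built-in hash() is salted per process, so it would not be).
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    return f"{base_name}_{h % NUM_SHARDS}"

# Writes and reads for a given key always target the same shard table:
print(shard_table("events", "user-42"))
```

Because the shard is derived from the partition key, any client can route a read to the right table without coordination; the cost is that cross-shard scans now need 4 queries.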
Nikolai,
This is more or less what I'm seeing on my cluster then. Trying to
switch to bigger sstables right now (1 GB).
On Mon, Nov 24, 2014 at 5:18 PM, Nikolai Grigoriev wrote:
> Andrei,
>
> Oh, Monday mornings...Tb :)
>
> On Mon, Nov 24, 2014 at 9:12 AM, Andrei Ivanov
tables
>>>> will never be compacted. Plus, it will require close to 2x disk space on
>>>> EVERY disk in my JBOD configuration...this will kill the node sooner or
>>>> later. This is all because all sstables after bootstrap end at L0 and then
>>>> the proce
se all sstables after bootstrap end at L0 and then
>>> the process slowly slowly moves them to other levels. If you have write
>>> traffic to that CF then the number of sstables and L0 will grow quickly -
>>> like it happens in my case now.
>>>
>>> Once so
Stephane,
We have a somewhat similar C* load profile, hence some comments in
addition to Nikolai's answer.
1. Fallback to STCS - you can actually disable it
2. Based on our experience, if you have a lot of data per node, LCS
may work just fine. That is, till the moment you decide to join
anothe
Amazing how I missed the -Dcassandra.disable_stcs_in_l0=true option -
I've had the LeveledManifest source open the whole day ;-)
On Tue, Nov 18, 2014 at 9:15 PM, Andrei Ivanov wrote:
> Thanks a lot for your support, Marcus - that is useful beyond all
> recognition!;-) And I will try #6621 righ
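For reference, the flag mentioned above is a JVM system property. A sketch of enabling it in cassandra-env.sh:

```shell
# cassandra-env.sh -- disable the STCS-in-L0 fallback for LCS tables,
# so the node keeps doing leveled compaction even when L0 falls behind.
JVM_OPTS="$JVM_OPTS -Dcassandra.disable_stcs_in_l0=true"
```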
means that compaction
>> > is
>> > not keeping up with your inserts and you should probably expand your
>> > cluster
>> > (or consider going back to SizeTieredCompactionStrategy for the tables
>> > that
>> > take that many writes)
>> >
ny files in L0 it means that compaction is
> not keeping up with your inserts and you should probably expand your cluster
> (or consider going back to SizeTieredCompactionStrategy for the tables that
> take that many writes)
>
> /Marcus
>
> On Tue, Nov 18, 2014 at 2:49 PM, Andrei Ivanov
g size tiered in L0 - if you have too many sstables
> in L0, we will do size tiered compaction on sstables in L0 to improve
> performance
>
> Use tools/bin/sstablemetadata to get the level for those sstables, if they
> are in L0, that is probably the reason.
>
> /Marcus
>
>
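Marcus's check can be scripted by parsing the output of tools/bin/sstablemetadata. A sketch with a sample output fragment embedded, since the tool's full output varies by version; the "SSTable Level" line is the relevant one, and the sample paths are made up:

```python
import re

def sstable_level(metadata_output: str) -> int:
    """Extract the LCS level from tools/bin/sstablemetadata output."""
    m = re.search(r"SSTable Level:\s*(\d+)", metadata_output)
    if m is None:
        raise ValueError("no 'SSTable Level' line found")
    return int(m.group(1))

# Example: feed it the captured output of
#   tools/bin/sstablemetadata path/to/xxx-Data.db
sample = """\
SSTable: /var/lib/cassandra/data/ks/cf/ks-cf-jb-1234
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
SSTable Level: 0
"""
print(sstable_level(sample))
```

Running this over every Data.db file in the table's directory would show how many sstables are stuck in L0.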
Dear all,
I have the following problem:
- C* 2.0.11
- LCS with default 160MB
- Compacted partition maximum bytes: 785939 (for cf/table xxx.xxx)
- Compacted partition mean bytes: 6750 (for cf/table xxx.xxx)
I would expect the sstables to be at most about 160 MB each. Despite
this I see files like:
192M
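The setup described above corresponds to table compaction options like the following. A CQL sketch; the keyspace and table names are placeholders for the redacted xxx.xxx:

```sql
-- Hypothetical names; LCS with the default 160 MB target sstable size.
ALTER TABLE myks.mytable
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 160};
```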