Yes, the recommendation still applies.
Wide partitions have a huge impact on repair (over-streaming), compaction, and
bootstrap.
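A quick way to check where you stand (the keyspace/table names below are
placeholders) is the per-table histogram, which reports partition size
percentiles in bytes:

nodetool tablehistograms my_keyspace my_table

(On 2.x the command is "nodetool cfhistograms" instead.) If the high
percentiles of "Partition Size" are well past ~100MB, the concerns above apply.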
On 10 May 2017 at 23:54, "Kant Kodali" wrote:
Hi All,
The Cassandra community has always recommended 100MB per partition as a
sweet spot; however, does this limitation still apply?
Hi DuyHai,
I am trying to see what we can do to get past this
limitation:
1. Would this https://issues.apache.org/jira/browse/CASSANDRA-7447 help at
all?
2. Can we have Merkle trees built for groups of rows within a partition, so
that we can stream only those groups where the data differs? (Closest existing
workaround sketched below.)
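(The closest lever I can find today is subrange repair, which at least bounds
the token range a single session hashes and streams; the token values and
keyspace name below are just placeholders:

nodetool repair -st -3074457345618258603 -et 3074457345618258602 my_keyspace

Within a partition, though, the Merkle leaf still covers the whole partition,
so one mismatched row streams the entire thing, which is exactly the problem.)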
Oh, this looks like the one I am looking for:
https://issues.apache.org/jira/browse/CASSANDRA-9754. Is this in Cassandra
3.10, or merged somewhere else?
On Thu, May 11, 2017 at 1:13 AM, Kant Kodali wrote:
> Hi DuyHai,
>
> I am trying to see what we can do to get past this
> limitation:
I'm almost done with a rebased trunk patch. I hit a few snags, and I want nothing
more than to finish this thing... The latest issue was due to range tombstones and
the fact that the deletion time has been stored in the index from 3.0 onwards.
I hope to have everything pushed very shortly. Sorry for the delay.
Hi Experts,
Seeking your help on a production issue. We were running a high-volume
write-intensive job on our 3 node Cassandra cluster, v2.1.7.
TPS on the nodes was high. The job ran for more than 2 days and thereafter the
load average on one of the nodes increased to a very high number (loadavg: 29).
System log repo
I've taken
CASSANDRA-13507
CASSANDRA-13517
-Jason
On Wed, May 10, 2017 at 9:45 PM, Lerh Chuan Low
wrote:
> I'll try my hand on https://issues.apache.org/jira/browse/CASSANDRA-13182.
>
> On 11 May 2017 at 05:59, Blake Eggleston wrote:
>
> > I've taken CASSANDRA-13194, CASSANDRA-13506, CASSANDR
Do you have a lot of compactions going on? It sounds like you might've built up
a huge backlog. Is your throttling configured properly?
> On 11 May 2017, at 18:50, varun saluja wrote:
>
> Hi Experts,
>
> Seeking your help on a production issue. We were running a high-volume
> write-intensive job on our 3 node Cassandra cluster, v2.1.7.
What does nodetool compactionstats show?
I meant compaction throttling. nodetool getcompactionthroughput
> On 11 May 2017, at 19:41, varun saluja wrote:
>
> Hi Oskar,
>
> Thanks for the response.
>
> Yes, I could see a lot of compaction threads. Actually we are loading around
> 400GB of data per node
That seems way too low. Depending on what type of disk you have, it should be
closer to 100-200 MB/s.
That's probably causing your problems. It would still take a while for you to
compact all your data, though.
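If the disks can take it, you can raise the throttle on the fly (128 here is
just an example value, not a recommendation for your hardware):

nodetool setcompactionthroughput 128

Setting it to 0 disables throttling entirely, but that can starve reads, so
step it up gradually. Rough math: at 16 MB/s, one pass over 400GB is about
400,000 / 16 ≈ 25,000 seconds, roughly 7 hours per node, before re-compactions.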
Sent from my iPhone
> On 11 May 2017, at 19:50, varun saluja wrote:
>
> nodetool getcompactionthroughput
Hi,
PFB the results for the same. The numbers are scary here.
[root@WA-CASSDB2 bin]# ./nodetool compactionstats
pending tasks: 137
   compaction type   keyspace   table    completed    total   unit   progress
        Compaction     system   hints   5762711108   837522
*nodetool getcompactionthroughput*
./nodetool getcompactionthroughput
Current compaction throughput: 16 MB/s
Regards,
Varun Saluja
On 11 May 2017 at 23:18, varun saluja wrote:
> Hi,
>
> PFB the results for the same. The numbers are scary here.
>
> [root@WA-CASSDB2 bin]# ./nodetool compactionstats
> pending tasks: 137
Hi Oskar,
Thanks for the response.
Yes, I could see a lot of compaction threads. Actually we are loading
around 400GB of data per node on the 3 node Cassandra cluster.
Throttling was set to write around 7k TPS per node. The job ran fine for 2 days
and then we started getting mutation drops, longer GCs, and very high load averages.
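For reference, the drop counts come from tpstats (run on each node); the
"Message type / Dropped" table at the end shows MUTATION drops since the node
started:

./nodetool tpstats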
This discussion should be on the C* user mailing list. Thanks!
best,
kjellman
> On May 11, 2017, at 10:53 AM, Oskar Kjellin wrote:
>
> That seems way too low. Depending on what type of disk you have, it should be
> closer to 100-200 MB/s.
> That's probably causing your problems. It would still take
Indeed, sorry. I'm subscribed to both, so I missed which one this was.
Sent from my iPhone
> On 11 May 2017, at 19:56, Michael Kjellman
> wrote:
>
> This discussion should be on the C* user mailing list. Thanks!
>
> best,
> kjellman
>
>> On May 11, 2017, at 10:53 AM, Oskar Kjellin wrote:
>>
>> That seems way too low.
Hi all,
In this JIRA ticket, https://issues.apache.org/jira/browse/CASSANDRA-13486,
we proposed integrating our code to support a fast flash+FPGA card (called
CAPI Flash) that is only available on the ppc architecture. Although we will keep
discussing the topics specific to the patch (e.g. documentation, l
Hey all,
I'm on board with what Rei is saying. I think we should be open to, and
encourage, other platforms/architectures for integration. Of course, it
will come down to specific maintainers/committers to do the testing and
verification on non-typical platforms. Hopefully those maintainers will
a