> What version of Cassandra?
>
> On Dec 16, 2014 6:36 PM, "Arne Claassen" wrote:
> That's just the thing. There is nothing in the logs except the constant
> ParNew collections like
>
> DEBUG [ScheduledTasks:1] 2014-12-16 19:03:35,042 GCInspector.java (line 118)
>
Cassandra 2.0.10 and Datastax Java Driver 2.1.1
On Dec 16, 2014, at 4:48 PM, Ryan Svihla wrote:
> What version of Cassandra?
>
> On Dec 16, 2014 6:36 PM, "Arne Claassen" wrote:
> That's just the thing. There is nothing in the logs except the constant
> ParNew collections like the GCInspector line above.
e what it would solve for you.
>
> Compaction running could explain a high load. Log messages with ERROR,
> WARN, or GCInspector are all meaningful there; I suggest searching jira for your
> version to see if there are any interesting bugs.
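
For reference, a quick way to scan a node for the messages Ryan mentions
(assuming the default package-install log path /var/log/cassandra/system.log):

# show recent ERROR / WARN / GCInspector lines from the system log
grep -E "ERROR|WARN|GCInspector" /var/log/cassandra/system.log | tail -n 50
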
>
>
>
> On Tue, Dec 16, 2014 at 6
the cause can be many other problems that are solvable
> with current hardware, and LOTS of people run nodes with similar
> configurations.
>
> On Tue, Dec 16, 2014 at 5:08 PM, Arne Claassen wrote:
> Not using any secondary indexes and memtable_flush_queue_size is the default
> 4
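
That setting lives in cassandra.yaml; one way to confirm it on each node
(the config path is an assumption, adjust for your install):

# the 2.0.x default for memtable_flush_queue_size is 4
grep memtable_flush_queue_size /etc/cassandra/cassandra.yaml
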
Are you using any secondary indexes? and if so how
> many? What is your flush queue set to?
>
> On Tue, Dec 16, 2014 at 4:43 PM, Arne Claassen wrote:
> Of course QA decided to start a test batch (still relatively low traffic), so
> I hope it doesn't throw the tpstats off too much
>
> Node 1:
>
On Tue, Dec 16, 2014 at 2:18 PM, Ryan Svihla wrote:
>
> Ok based on those numbers I have a theory..
>
> can you show me nodetool tpstats for all 3 nodes?
>
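
A minimal sketch of collecting that from all three nodes (the hostnames are
placeholders):

# sustained Pending/Blocked counts point at the stage that is backed up
for h in node1 node2 node3; do
  echo "== $h =="
  ssh "$h" nodetool tpstats
done
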
> On Tue, Dec 16, 2014 at 4:04 PM, Arne Claassen wrote:
>>
>> No problem with the follow up questions. I'm on a crash course h
p partition key batches.
>
> nodetool cfhistograms
>
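
cfhistograms takes a keyspace and column family; the names below are only
illustrative:

# per-CF latency, row size and column count histograms
nodetool cfhistograms media frames
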
> On Tue, Dec 16, 2014 at 3:42 PM, Arne Claassen wrote:
>>
>> Actually not sure why the machine was originally configured at 6GB since
>> we even started it on an r3.large with 15GB.
>>
>> Re: Batches
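
The batch discussion is cut off in this excerpt; for context, a
single-partition unlogged batch in CQL generally looks like the sketch below
(it assumes a frames table like the one sketched alongside the data-model
description further down; all names and values are made up):

cqlsh <<'CQL'
-- every statement shares the same partition key (media_id), so the batch
-- is written to one replica set instead of being scattered across nodes
BEGIN UNLOGGED BATCH
  INSERT INTO media.frames (media_id, frame_time, frame_no, data)
    VALUES (62c36092-82a1-3a00-93d1-46196ee77204, '2014-12-16 19:00:00', 1, 0x00);
  INSERT INTO media.frames (media_id, frame_time, frame_no, data)
    VALUES (62c36092-82a1-3a00-93d1-46196ee77204, '2014-12-16 19:00:01', 2, 0x01);
APPLY BATCH;
CQL
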
>
to run Cassandra well in, especially if you're going full bore on
> loads. However, you may just flat out be CPU bound on your write
> throughput; how many TPS and what size writes do you have? Also, what is
> your widest row?
>
> Final question: what is compaction throughput at?
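
Both of those can be checked without a restart (paths are assumptions):

# compaction throughput is compaction_throughput_mb_per_sec in cassandra.yaml
# (16 MB/s by default) and can be changed live with
# nodetool setcompactionthroughput <MB/s>
grep compaction_throughput_mb_per_sec /etc/cassandra/cassandra.yaml

# widest row: cfstats reports the maximum compacted row size per column family
nodetool cfstats | grep -i maximum
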
the
> tunings as indicated in
> https://issues.apache.org/jira/browse/CASSANDRA-8150
>
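
Those knobs live in cassandra-env.sh; a minimal sketch of where they are set
(the values below are placeholders, not the numbers recommended in
CASSANDRA-8150):

# conf/cassandra-env.sh
MAX_HEAP_SIZE="8G"   # total JVM heap
HEAP_NEWSIZE="2G"    # ParNew young generation size
# further GC flags are appended to JVM_OPTS, e.g.
JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1"
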
> On Tue, Dec 16, 2014 at 3:06 PM, Arne Claassen wrote:
>>
>> Changed the 15GB node to 25GB heap and the nice CPU is down to ~20% now.
>> Checked my dev cluster to see if the ParNew
I have a time series table consisting of frame information for media. The
table is partitioned on the media ID and uses time and some other frame
level keys as clustering keys, i.e. all frames for one piece of media are
really one column family "row", even though it is represented in CQL as an
ordered set of rows.
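
A rough sketch of that layout in CQL (keyspace, table, column names and types
are guesses, not the actual schema):

cqlsh <<'CQL'
CREATE KEYSPACE IF NOT EXISTS media
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

CREATE TABLE media.frames (
    media_id   uuid,       -- partition key: one partition per piece of media
    frame_time timestamp,  -- clustering keys order frames within the partition
    frame_no   int,
    data       blob,
    PRIMARY KEY (media_id, frame_time, frame_no)
);
CQL

With this shape every frame for a given media ID lands in one partition, so a
heavy ingest for a single piece of media concentrates its writes on that
partition's replicas.
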
replayAllFailedBatches
Is that just routine scheduled house-keeping or a sign of something else?
Sorry, I meant 15GB heap on the one machine that has less nice CPU% now.
The others are 6GB
On Tue, Dec 16, 2014 at 12:50 PM, Arne Claassen wrote:
>
> AWS r3.xlarge, 30GB, but only using a heap of 10GB and new gen of 2GB, because we
> might go c3.2xlarge instead if CPU is more important than RAM
Ryan Svihla wrote:
>
> What's CPU, RAM, Storage layer, and data density per node? Exact heap
> settings would be nice. In the logs look for TombstoneOverflowingException
>
>
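
Two quick checks for the density question and the tombstone warnings (the log
path is an assumption):

# the "Load" column shows on-disk data per node
nodetool status

# any tombstone threshold warnings or exceptions in the log
grep -i tombstone /var/log/cassandra/system.log
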
> On Tue, Dec 16, 2014 at 1:36 PM, Arne Claassen wrote:
>>
>> I'm running 2.0.10.
On Tue, Dec 16, 2014 at 2:04 PM, Arne Claassen wrote:
I have a three node cluster that has been sitting at a load of 4 (for each
node), 100% CPU utilization (although 92% nice) for the last 12 hours,
ever since some significant writes finished. I'm trying to determine what
tuning I should be doing to get it out of this state. The debug log is just
an endless stream of ParNew collection messages.
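
Since 92% of that CPU is nice time, a first check is whether the busy process
is the Cassandra JVM and whether compactions are still running:

# confirm which process owns the nice CPU
top -b -n 1 | head -n 20

# pending and active compactions on this node
nodetool compactionstats
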