On Tue, Sep 15, 2015 at 7:44 AM, Tom van den Berge <
tom.vandenbe...@gmail.com> wrote:
>
>> Read queries on a secondary index are somehow causing an excessively high
>> CPU load on all nodes in my DC.
>>
> ...
>
>> What really surprised me is that executing a single query on this
>> secondary index makes the "Local read count" i
On Tue, Sep 15, 2015 at 7:44 AM, Tom van den Berge <
tom.vandenbe...@gmail.com> wrote:
> Read queries on a secondary index are somehow causing an excessively high
> CPU load on all nodes in my DC.
>
...
> What really surprised me is that executing a single query on this
>
Read queries on a secondary index are somehow causing an excessively high
CPU load on all nodes in my DC.
The table has some 60K records, and the cardinality of the index is very
low (~10 distinct values). The returned result set typically contains
10-30K records.
The same queries on nodes in
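For illustration, a minimal sketch of the kind of schema and query being
described (table, column and value names below are made up, not from the
original message):

    -- ~60K rows, indexed column has only ~10 distinct values
    CREATE TABLE users (
        id     uuid PRIMARY KEY,
        name   text,
        status text          -- low-cardinality column
    );
    CREATE INDEX users_status_idx ON users (status);

    -- a single query like this returns 10-30K rows and is reported to
    -- drive CPU up on all nodes in the DC
    SELECT * FROM users WHERE status = 'active';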
I have a DC of 4 nodes that must be expanded to accommodate an expected
growth in data. Since the DC is not using vnodes, we have decided to set up
a new DC with vnodes enabled, start using the new DC, and decommission the
old DC.
Both DCs have 4 nodes. The idea is to add additional nodes to the n
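For reference, the usual shape of such a migration, sketched with placeholder
keyspace and DC names (not taken from the original message):

    -- 1. add the new DC to the keyspace replication
    ALTER KEYSPACE my_ks WITH replication =
        {'class': 'NetworkTopologyStrategy', 'DC_old': 3, 'DC_new': 3};

    # 2. on each node in the new DC, stream the existing data from the old DC
    nodetool rebuild DC_old

    # 3. once clients have been switched to the new DC, remove the old
    #    nodes one at a time
    nodetool decommission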
I'm still struggling to find the root cause of these CPU
utilisation patterns.
http://i58.tinypic.com/24pifcy.jpg
About 3 weeks after a C* restart, CPU utilisation goes through the
roof; this doesn't happen shortly after the restart (which is
visible in the graph).
C* is runni
Yup... it seems like it's the GC's fault.
GC logs:
2015-07-21T14:19:54.336+: 2876133.270: Total time for which
application threads were stopped: 0.0832030 seconds
2015-07-21T14:19:55.739+: 2876134.673: Total time for which
application threads were stopped: 0.0806960 seconds
2015-07-21T14:19:57.14
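Assuming the log format shown above, a rough way to total the stopped time in
such a log (the log file name is just an example):

    grep "Total time for which application threads were stopped" gc.log \
      | awk '{ sum += $(NF-1) } END { print sum, "seconds stopped" }'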
just a guess, gc?
On Mon, Jul 20, 2015 at 3:15 PM, Marcin Pietraszek wrote:
> Hello!
>
> I've noticed strange CPU utilisation patterns on machines in our
> cluster. After a C* daemon restart it behaves normally, but a
> few weeks after the restart CPU usage starts to rise. Currently on o
Hello!
I've noticed strange CPU utilisation patterns on machines in our
cluster. After a C* daemon restart it behaves normally, but a
few weeks after the restart CPU usage starts to rise. Currently on one
of the nodes (screenshots attached) the CPU load is ~4. Shortly before
restart load raise
Upgrade from 2.0.3. There are several bugs,
On Wednesday, February 19, 2014, Yogi Nerella wrote:
> You should start your Cassandra daemon with -verbose:gc (please check the
> syntax) and run it in the foreground (as Cassandra closes standard out).
> Please see other emails in this forum for getting
You should start your Cassandra daemon with -verbose:gc (please check the
syntax) and run it in the foreground (as Cassandra closes standard out).
Please see other emails in this forum for getting Garbage Collection
statistics from the Cassandra user mailing list, or look at any Java-specific sites.
Ex:
http:
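As a hedged example, GC logging is usually enabled through cassandra-env.sh;
the exact flags depend on the Cassandra and JVM version, and the log path below
is just an example:

    JVM_OPTS="$JVM_OPTS -verbose:gc"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
    JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"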
How do I get that statistic?
On Wed, Feb 19, 2014 at 10:34 PM, Yogi Nerella wrote:
> It could be that your -Xmn800M is too low; that is why it is garbage
> collecting very frequently.
> Do you have any statistics on how much memory it is collecting on every
> cycle?
>
>
>
> On Wed, Feb 19, 2014 a
It could be that your -Xmn800M is too low; that is why it is garbage
collecting very frequently.
Do you have any statistics on how much memory it is collecting on every
cycle?
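One way to get such per-cycle numbers, assuming the JDK tools are available on
the node (the pid placeholder is illustrative):

    jstat -gc <cassandra_pid> 1000      # region capacities and usage in KB, plus GC counts/times
    jstat -gcutil <cassandra_pid> 1000  # region usage as percentages, plus GC counts/times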
On Wed, Feb 19, 2014 at 8:47 AM, Sourabh Agrawal wrote:
> Below is CPU usage from top. I don't see any steal. Idle time
Below is CPU usage from top. I don't see any steal. Idle time is pretty low.
Cpu(s): 83.3%us, 14.5%sy, 0.0%ni, 0.5%id, 0.0%wa, 0.0%hi, 1.7%si, 0.0%st
Any other pointers?
On Wed, Feb 19, 2014 at 8:34 PM, Nate McCall wrote:
> You may be seeing steal from another tenant on the VM. This art
You may be seeing steal from another tenant on the VM. This article has a
good explanation:
http://blog.scoutapp.com/articles/2013/07/25/understanding-cpu-steal-time-when-should-you-be-worried
In short, kill the instance and launch a new one. Depending on your latency
requirements and operational
Hi,
I am running a Cassandra 2.0.3 cluster on 4 AWS nodes. The memory arguments are
the following for each node:
-Xms8G -Xmx8G -Xmn800M
I am experiencing consistently high load on one of the nodes. Each node is
getting approximately the same number of writes. I tried to have a look at the
logs and seems l
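For context, in a stock 2.0.x install those JVM arguments usually come from
cassandra-env.sh rather than being passed directly; a sketch of the equivalent
settings (values taken from the flags quoted above):

    MAX_HEAP_SIZE="8G"     # becomes -Xms8G -Xmx8G
    HEAP_NEWSIZE="800M"    # becomes -Xmn800M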
from Ganglia is high CPU load on this server, and also the number
> of TCP connections on port 9160 is around 600+ all the time. The distribution
> of these connections shows around 90-odd connections from this machine to each
> of the other DC machines. For port 7000 it's around 45.
Cou
I have a multi-DC ring with 6 nodes in each DC.
I have a single node which runs some jobs (including Hadoop Map-Reduce with
PIG) every 15 minutes.
Lately there have been high CPU load and memory issues on this node.
What I could see from Ganglia is high CPU load on this server and also the number
of
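An illustrative way to get those per-port connection counts (9160 is the Thrift
client port, 7000 the internode port; the exact command is just an example):

    netstat -tn | awk '$4 ~ /:9160$/ && $6 == "ESTABLISHED"' | wc -l
    netstat -tn | awk '$4 ~ /:7000$/ && $6 == "ESTABLISHED"' | wc -l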