Hello,
I am not sure what 'RT' stands for (I feel like I'm missing something obvious
:)), and there are no scales on this dashboard, so it's hard for me to say
anything here.
Could you give us more details?
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France / Spain
The Last Pickle
Dear All,
We are using C* 2.1.18. When we bootstrap a new node, the RT jumps when the
new node starts up, then it goes back to normal. Could anyone please advise?
Thanks,
Peng Xiao
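A minimal sketch, assuming the jump comes from streaming load while the new node bootstraps; the value below is only illustrative (nodetool setstreamthroughput takes megabits per second, and 0 disables the throttle):

# On each existing node, lower the streaming throttle while the bootstrap runs
nodetool setstreamthroughput 100    # illustrative value, in megabits/s

# Restore the default (200 in 2.1) once the new node has joined
nodetool setstreamthroughput 200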
Hello,
I am adding a new node to a cluster, and the bootstrap failed with the following errors:
WARN [STREAM-IN-/111.11.11.193] 2017-11-30 04:44:05,825
CompressedStreamReader.java:119 - [Stream c9d90a60-d56f-11e7-a95c-39e84a08567f]
Error while reading partition null from stream on ks='system_auth' and
Hi,
Well, I guess knowing the disk behaviour would be useful to understand whether
it is really filling up and why (a quick way to check is sketched after the questions below).
- What is the disk capacity?
- Does it actually fill up?
- If it is filling up, it might mean that all your nodes are not running
with enough available space and that any node c
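A quick sketch of how to answer those questions on each node; the data directory path below is an assumption, adjust it to your install:

# Capacity and usage of the volume holding the data directory
df -h /var/lib/cassandra

# Size of the Cassandra data directory itself (path is an assumption)
du -sh /var/lib/cassandra/data

# Load per node as Cassandra reports it
nodetool status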
Cluster Information:
Name: Cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
397560b8-7245-3903-8828-60a97e5be4aa: [xxx.xxx.xxx.75, xxx.xxx.xxx.134,
xxx.xxx.xxx.192, xxx.xxx.xxx.132, xxx.xxx.xxx.133, xxx
Hi,
could you share the following information with us (a sketch of the commands follows below)?
- "nodetool status" output
- Keyspace definitions (we need to check the replication strategy you're
using on all keyspaces)
- Specifics about what you're calling "groups" in a DC. Are these racks?
Thanks
On Sat, Feb 4, 2017 at 10:41 AM l
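A sketch of the commands that produce the information requested above; <keyspace_name> is a placeholder:

# Ring view: status, load, token count, ownership and rack per node
nodetool status

# Inside cqlsh, show each keyspace's replication strategy and factors:
#     DESCRIBE KEYSPACES;
#     DESCRIBE KEYSPACE <keyspace_name>;
cqlsh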
Yes .. same number of tokens...
256
On Sat, Feb 4, 2017 at 11:56 AM, Jonathan Haddad wrote:
> Are you using the same number of tokens on the new node as the old ones?
>
> On Fri, Feb 3, 2017 at 8:31 PM techpyaasa . wrote:
>
>> Hi,
>>
>> We are using c* 2.0.17 , 2 DCs , RF=3.
>>
>> When I try to
Are you using the same number of tokens on the new node as the old ones?
On Fri, Feb 3, 2017 at 8:31 PM techpyaasa . wrote:
> Hi,
>
> We are using c* 2.0.17 , 2 DCs , RF=3.
>
> When I try to add new node to one group in a DC , I got disk full. Can
> someone please tell what is the best way to res
Hi,
We are using c* 2.0.17, 2 DCs, RF=3.
When I try to add a new node to one group in a DC, I run out of disk space. Can
someone please tell me the best way to resolve this?
Run compaction for the nodes in that group (the group to which I'm going to add the new node,
as data streams to new nodes from the nodes of that group
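A sketch of the usual checks, assuming the disk pressure is on the existing replicas in that group; the data directory path is an assumption:

# On each node that will stream to the new node: free space and compaction backlog
df -h /var/lib/cassandra
nodetool compactionstats

# After the new node has fully joined, reclaim space on the old nodes by
# dropping data for token ranges they no longer own (one node at a time)
nodetool cleanup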
Still having issues with node bootstrapping. The new node just died
because it full GCed, and the nodes it had active streams with noticed it was
down. After the full GC finished, the new node printed this log:
ERROR 02:52:36,259 Stream failed because /10.10.20.35 died or was
restarted/removed (streams
Also, right now the "top" command shows that we are at 500-700% CPU, and we
have 23 total processors, which means we have a lot of idle CPU left over.
So would throwing more threads at compaction and flush alleviate the
problem?
On Tue, Aug 5, 2014 at 2:57 PM, Ruchir Jha wrote:
>
> Right now,
Right now, we have 6 flush writers and compaction_throughput_mb_per_sec is
set to 0, which I believe disables throttling.
Also, here is the iostat -x 5 5 output:
Device:   rrqm/s  wrqm/s  r/s  w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
sda       10.00   1450
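For reference, a sketch of how that throttle can be inspected and changed at runtime; the value below is illustrative, and 0 keeps compaction unthrottled as described above:

# Read the current compaction throttle in MB/s, if your nodetool version has this command (0 = unthrottled)
nodetool getcompactionthroughput

# Change it without a restart, e.g. to 64 MB/s (illustrative value)
nodetool setcompactionthroughput 64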
Hi Ruchir,
The large number of blocked flushes and the number of pending
compactions would still indicate IO contention. Can you post the output of
'iostat -x 5 5'?
If you do in fact have spare IO, there are several configuration options
you can tune, such as increasing the number of flush writers
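A small sketch of where those knobs live; the cassandra.yaml path is an assumption:

# Current flush writer and compaction settings on this node
grep -E '^[# ]*(memtable_flush_writers|concurrent_compactors|compaction_throughput_mb_per_sec)' /etc/cassandra/cassandra.yaml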
Also, Mark, regarding your comment on my tpstats output: below is my iostat
output, and iowait is at 4.59%, which suggests no IO pressure, but we are still
seeing the bad flush performance. Should we try increasing the flush
writers?
Linux 2.6.32-358.el6.x86_64 (ny4lpcas13.fusionts.corp) 08/05/2014 _x8
nodetool status:
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load     Tokens  Owns (effective)  Host ID                               Rack
UN  10.10.20.27  1.89 TB  256     25.4%             76023cdd-c42d-4068-8b53-ae94584b8b04  rack1
UN  10.10.
>
> Yes num_tokens is set to 256. initial_token is blank on all nodes
> including the new one.
OK, so you have num_tokens set to 256 for all nodes with initial_token
commented out; this means you are using vnodes, and the new node will
automatically grab a list of tokens to take over responsibility
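A small sketch of how to confirm that on each node; the config path is an assumption:

# vnodes are in use when num_tokens is set and initial_token is left blank/commented
grep -E '^[# ]*(num_tokens|initial_token)' /etc/cassandra/cassandra.yaml

# The Tokens column should read 256 for every node, including the joining one
nodetool status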
Sorry for the multiple updates, but another thing I found is that all the other
existing nodes have themselves in their seeds list, while the new node does not
have itself in its seeds list. Can that cause this issue?
On Tue, Aug 5, 2014 at 10:30 AM, Ruchir Jha wrote:
> Just ran this on the new node:
>
Just ran this on the new node:
nodetool netstats | grep "Streaming from" | wc -l
10
It seems like the new node is receiving data from 10 other nodes. Is that
expected in a vnodes-enabled environment?
Ruchir.
On Tue, Aug 5, 2014 at 10:21 AM, Ruchir Jha wrote:
> Also not sure if this is relevant
Also not sure if this is relevant but just noticed the nodetool tpstats
output:
Pool Name                    Active   Pending   Completed   Blocked   All time blocked
FlushWriter                       0         0        1136         0                512
Looks like about 50% of flushes are blocked
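One way to keep an eye on that counter while tuning, sketched here on the assumption that the pool is still named FlushWriter in this version:

# Re-check the flush pool every 10 seconds; "All time blocked" growing
# relative to "Completed" means flushes are still being blocked
watch -n 10 'nodetool tpstats | grep -E "Pool Name|FlushWriter"'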
Yes num_tokens is set to 256. initial_token is blank on all nodes including
the new one.
On Tue, Aug 5, 2014 at 10:03 AM, Mark Reddy wrote:
> My understanding was that if initial_token is left empty on the new node,
>> it just contacts the heaviest node and bisects its token range.
>
>
> If you
Thanks Patricia for your response!
On the new node, I just see a lot of the following:
INFO [FlushWriter:75] 2014-08-05 09:53:04,394 Memtable.java (line 400)
Writing Memtable
INFO [CompactionExecutor:3] 2014-08-05 09:53:11,132 CompactionTask.java
(line 262) Compacted 12 sstables to
so basically
>
> My understanding was that if initial_token is left empty on the new node,
> it just contacts the heaviest node and bisects its token range.
If you are using vnodes and you have num_tokens set to 256, the new node
will take token ranges dynamically. What is the configuration of your other
nodes
Ruchir,
What exactly are you seeing in the logs? Are you running major compactions
on the new bootstrapping node?
With respect to the seed list, it is generally advisable to use 3 seed
nodes per AZ / DC.
Cheers,
On Mon, Aug 4, 2014 at 11:41 AM, Ruchir Jha wrote:
> I am trying to bootstrap th
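For reference, a sketch of where the seed list is configured; the cassandra.yaml path is an assumption, and the usual practice is to keep the same short list (roughly 3 nodes per DC, as noted above) on every node, including nodes that are not seeds themselves:

# Show the seed_provider block on this node
grep -A 4 'seed_provider:' /etc/cassandra/cassandra.yaml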
I am trying to bootstrap the thirteenth node in a 12-node cluster where the
average data size per node is about 2.1 TB. The bootstrap streaming has
been going on for 2 days now, and the disk size on the new node is already
above 4 TB and still growing. Is this because the new node is running major
co
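A sketch of how one might check what the joining node is actually spending its time on:

# Active and pending compactions on the joining node, with progress
nodetool compactionstats

# Streams still in flight to and from this node
nodetool netstats

# On-disk space per column family as the node sees it
nodetool cfstats | grep -i 'space used'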
On 11/01/2013 03:03 PM, Robert Coli wrote:
On Fri, Nov 1, 2013 at 9:36 AM, Narendra Sharma
wrote:
I was successfully able to bootstrap the node. The issue was RF > 2.
Thanks again Robert.
For the record, I'm not entirely clear why bootstrapping two nodes into the
same range should have cause
On Fri, Nov 1, 2013 at 9:36 AM, Narendra Sharma
wrote:
> I was successfully able to bootstrap the node. The issue was RF > 2.
> Thanks again Robert.
>
For the record, I'm not entirely clear why bootstrapping two nodes into the
same range should have caused your specific bootstrap problem, but I a
I was successfully able to bootstrap the node. The issue was RF > 2. Thanks
again Robert.
On Wed, Oct 30, 2013 at 10:29 AM, Narendra Sharma wrote:
> Thanks Robert.
>
> I didn't realize that some of the keyspaces (not all and esp. the biggest
> one I was focusing on) had RF > 2. I wasted 3 days
Thanks Robert.
I didn't realize that some of the keyspaces (not all and esp. the biggest
one I was focusing on) had RF > 2. I wasted 3 days on it. Thanks again for
the pointers. I will try again and share the results.
On Wed, Oct 30, 2013 at 12:28 AM, Robert Coli wrote:
> On Tue, Oct 29, 2013
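A sketch of how to confirm the replication factor of every keyspace (not just the largest one) before bootstrapping; this assumes a 1.2/2.0-era schema table (newer versions expose system_schema.keyspaces instead):

# List every keyspace with its strategy class and options (RF per DC)
echo "SELECT keyspace_name, strategy_class, strategy_options FROM system.schema_keyspaces;" | cqlsh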
On Tue, Oct 29, 2013 at 11:45 AM, Narendra Sharma wrote:
> We had a cluster of 4 nodes in AWS. The average load on each node was
> approx 750GB. We added 4 new nodes. It is now more than 30 hours and the
> node is still in JOINING mode.
> Specifically I am analyzing the one with IP 10.3.1.29. The
We had a cluster of 4 nodes in AWS. The average load on each node was
approx 750GB. We added 4 new nodes. It is now more than 30 hours and the
node is still in JOINING mode.
Specifically I am analyzing the one with IP 10.3.1.29. There is no
compaction or streaming or index building happening.
$ ./