On Mon, Dec 12, 2011 at 3:47 PM, Brian Fleming wrote:
>
> However, after the repair completed, we had over 2.5 times the original
> load. Issuing a 'cleanup' reduced this to about 1.5 times the original
> load. We observed an increase in the number of keys via 'cfstats', which is
> obviously accounting for …
Yes, the fact that nodes send TreeRequests (and merkle trees) to themselves is
part of the protocol; no problem there.
As for "it has run for many hours without repairing anything", what makes you
think it didn't repair anything?
--
Sylvain
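To make the TreeRequest step concrete, here is a miniature, hypothetical sketch of how merkle trees let replicas find out-of-sync token ranges without shipping all the data (Python for illustration; the names and structure are simplified, not Cassandra's actual implementation):

```python
import hashlib

def leaf_hash(rows):
    # Hash all rows falling in one token range (one merkle leaf).
    h = hashlib.sha256()
    for key, value in sorted(rows):
        h.update(key.encode())
        h.update(value.encode())
    return h.hexdigest()

def merkle_root(leaves):
    # Combine leaf hashes pairwise until a single root hash remains.
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last hash on odd levels
        level = [hashlib.sha256((a + b).encode()).hexdigest()
                 for a, b in zip(level[::2], level[1::2])]
    return level[0]

# Two replicas of four token ranges; replica_b has stale data in range 1.
replica_a = [[("k1", "v1")], [("k2", "v2")], [("k3", "v3")], [("k4", "v4")]]
replica_b = [[("k1", "v1")], [("k2", "old")], [("k3", "v3")], [("k4", "v4")]]

leaves_a = [leaf_hash(r) for r in replica_a]
leaves_b = [leaf_hash(r) for r in replica_b]

# Roots differ, so walk the leaves: only mismatched ranges need streaming.
out_of_sync = [i for i, (a, b) in enumerate(zip(leaves_a, leaves_b)) if a != b]
print(merkle_root(leaves_a) == merkle_root(leaves_b))  # False
print(out_of_sync)                                     # [1]
```

A node builds exactly the same kind of tree over its own data, which is why a TreeRequest addressed to itself is expected rather than a bug.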
On Mon, Sep 19, 2011 at 4:14 PM, Jason Harvey wrote:
Got a response from jbellis in IRC saying that the node will have to
build its own hash tree. The request to itself is normal.
On Mon, Sep 19, 2011 at 7:01 AM, Jason Harvey wrote:
> I have a node in my 0.8.5 ring that I'm attempting to repair. I sent
> it the repair command and let it run for a f…
SSTable rebuilding: it might be the problem from CASSANDRA-2280.
On Thu, Jul 21, 2011 at 7:52 PM, aaron morton wrote:
What are you seeing in compaction stats ?
You may see some of https://issues.apache.org/jira/browse/CASSANDRA-2280
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 21 Jul 2011, at 23:17, Yan Chunlu wrote:
after trying nodetool -h reagon repair key cf, I found that even when repairing
a single CF, it rebuilds all sstables (per nodetool compactionstats). Is that
normal?
On Thu, Jul 21, 2011 at 7:56 AM, Aaron Morton wrote:
thank you very much for the help, I will try to adjust minor compaction and
also dealing with single CF at a time.
On Thu, Jul 21, 2011 at 7:56 AM, Aaron Morton wrote:
If you have never run repair, also check the section on repair on this page:
http://wiki.apache.org/cassandra/Operations
It covers how frequently repair should be run.
There is an issue where repair can stream too much data, and this can lead to
excessive disk use.
My non-scientific approach to the neve…
just found this:
https://issues.apache.org/jira/browse/CASSANDRA-2156
but it seems to be available only for 0.8, and people submitted a patch for
0.6; I am using 0.7.4. Do I need to dig into the code and make my own patch?
Does adding compaction throttling solve the IO problem? Thanks!
On Wed, Jul 20, 2011 at …
> The more often you repair, the quicker it will be. The more often your
> nodes go down the longer it will be.
Going to have to disagree a bit here. In most cases the cost of
running through the data and calculating the merkle tree should be
quite significant, and hopefully the differences should be small.
(not answering (1) right now, because it's more involved)
> 2. Does a Nodetool Repair block any reads and writes on the node,
> while the repair is going on ? During repair, if I try to do an
> insert, will the insert wait for repair to complete first ?
It doesn't imply any blocking. It's roughly …
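A toy model of why there is no blocking (hypothetical Python, not Cassandra's code): the validation scan reads a fixed snapshot of the on-disk sstables, while inserts keep going to the in-memory memtable, so neither waits on the other:

```python
# Minimal sketch with invented names: repair's validation scan works on an
# immutable snapshot of the sstables, while new writes keep landing in the
# memtable (and in sstables flushed after the snapshot was taken).

class Node:
    def __init__(self):
        self.sstables = [{"k1": "v1"}, {"k2": "v2"}]  # immutable, on disk
        self.memtable = {}                            # mutable, in memory

    def write(self, key, value):
        self.memtable[key] = value                    # never waits on repair

    def validation_scan(self):
        snapshot = list(self.sstables)                # fixed view for the scan
        return sorted(k for table in snapshot for k in table)

node = Node()
scanned = node.validation_scan()
node.write("k3", "v3")          # accepted even while a scan is in flight
print(scanned)                  # ['k1', 'k2'] - snapshot excludes the new write
print("k3" in node.memtable)    # True
```

The cost the later messages describe is the scan itself (reading all the data to hash it), not any lock held against reads or writes.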
The more often you repair, the quicker it will be. The more often your
nodes go down the longer it will be.
Repair streams data that is missing between nodes. So the more data
that is different, the longer it will take. Your workload is impacted
because the node has to scan the data it has to be…
To: user@cassandra.apache.org
Subject: Re: node repair
On Mon, Mar 22, 2010 at 11:53 AM, Todd Burruss wrote:
> it's very possible if i thought it wasn't working. is there a delay between
> compation and streaming?
yes, it can be a significant one if you have a lot of data.
you can look at the compaction mbean for progress on that side of things.
didn't see any compaction.
From: Stu Hood [stu.h...@rackspace.com]
Sent: Monday, March 22, 2010 7:08 AM
To: user@cassandra.apache.org
Subject: RE: node repair
Hey Todd,
Repair involves 2 major compactions in addition to the streaming. More
information … for that case.
Thanks,
Stu
-Original Message-
From: "Todd Burruss"
Sent: Sunday, March 21, 2010 3:43pm
To: "user@cassandra.apache.org"
Subject: RE: node repair
while preparing a test to capture logs i decided to not let the data set get
too big, and i did see it finish. … below except for read repair ... i'll keep
an eye out for it again and try it again with more data.
thx
From: Stu Hood [stu.h...@rackspace.com]
Sent: Sunday, March 21, 2010 12:08 PM
To: user@cassandra.apache.org
Subject: RE: node repair
If you have debug logs from the run, would you mind opening a JIRA describing
the problem?
-Original Message-
From: "Todd Burruss"
Sent: Sunday, March 21, 2010 1:30pm
To: "Todd Burruss" , "user@cassandra.apache.org"
Subject: RE: node repair
one last comment: … random
partitioner and assigned a token to each node.
From: Todd Burruss
Sent: Saturday, March 20, 2010 6:48 PM
To: Todd Burruss; user@cassandra.apache.org
Subject: RE: node repair
fyi ... i just compacted and node 105 is definitely not being repaired
From: Todd Burruss
Sent: Saturday, March 20, 2010 12:34 PM
To: user@cassandra.apache.org
Subject: RE: node repair
same IP, same token. i'm trying Handling Failure, #3.
it is ru…
…05   Up   65.62 GB   170141183460469231731687303715884105728   |-->|
From: Jonathan Ellis [jbel...@gmail.com]
Sent: Saturday, March 20, 2010 11:23 AM
To: user@cassandra.apache.org
Subject: Re: node repair
if you bring up a new node w/ a different ip but the same token, it
will confuse things.
http://wiki.apache.org/cassandra/Operations "handling failure" section
covers best practices here.
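A toy ring model (hypothetical Python, not Cassandra's code) of the confusion Jonathan describes: ownership is keyed by token, so a replacement that joins with the same token under a new IP leaves two endpoints claiming the same ring position until the old one is forgotten:

```python
import bisect

# Sorted node tokens and which endpoint owns each (toy values).
tokens = [0, 42, 85, 127]
owners = {0: "10.0.0.1", 42: "10.0.0.2", 85: "10.0.0.3", 127: "10.0.0.4"}

def find_owner(key_token):
    # A key belongs to the first node token >= its own token (wrapping).
    i = bisect.bisect_left(tokens, key_token)
    return owners[tokens[i % len(tokens)]]

print(find_owner(50))   # 10.0.0.3 owns the range (42, 85]

# The node at .3 dies; a replacement joins with the SAME token, new IP.
# State that maps endpoint -> token keeps the stale entry around:
endpoint_tokens = {"10.0.0.3": 85}    # lingering entry for the dead node
endpoint_tokens["10.0.0.99"] = 85     # replacement node
claimants = sorted(ip for ip, t in endpoint_tokens.items() if t == 85)
print(claimants)        # ['10.0.0.3', '10.0.0.99'] - token 85 is ambiguous
```

The "handling failure" practices linked above avoid this state by either reusing the same IP or explicitly removing the dead endpoint first.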
On Sat, Mar 20, 2010 at 11:51 AM, Todd Burruss wrote:
> i had a node fail, lost all data. so i brought it back …