Hi!
I had the same problem (over-counting due to replay of the commit log, which
ignored drain) after upgrading my cluster from 1.0.9 to 1.0.11.
I updated the Cassandra tickets mentioned in this thread.
Regards,
Tamar Fraenkel
Senior Software Engineer, TOK Media
ta...@tok-m
On Tue, Nov 20, 2012 at 2:49 PM, Rob Coli wrote:
> On Mon, Nov 19, 2012 at 7:18 PM, Mike Heffner wrote:
> > We performed a 1.1.3 -> 1.1.6 upgrade and found that all the logs
> > replayed regardless of the drain.
>
> Your experience and desire for different (expected) behavior is welcomed
> on: https://issues.apache.org/jira/browse/CASSANDRA-4446
On Mon, Nov 19, 2012 at 7:18 PM, Mike Heffner wrote:
> We performed a 1.1.3 -> 1.1.6 upgrade and found that all the logs replayed
> regardless of the drain.
Your experience and desire for different (expected) behavior is welcomed on:
https://issues.apache.org/jira/browse/CASSANDRA-4446
"nodeto
Alain,
My understanding is that drain ensures that all memtables are flushed, so
that there is no data in the commitlog that isn't in an sstable. A
marker is saved that indicates the commit logs should not be replayed.
Commitlogs are only removed from disk periodically
(after commitlog_total_sp
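Given that behaviour, a quick pre-restart sanity check is to look at the commitlog directory after draining: anything still on disk is a candidate for replay. A minimal sketch — the `COMMITLOG_DIR` default here is an assumption, adjust it for your install:

```shell
# Count commitlog segments left on disk after `nodetool drain`.
# The directory path is an assumption; it varies by install.
COMMITLOG_DIR=${COMMITLOG_DIR:-/var/lib/cassandra/commitlog}
remaining=$(ls "$COMMITLOG_DIR"/*.log 2>/dev/null | wc -l)
if [ "$remaining" -gt 0 ]; then
    echo "warning: $remaining segment(s) remain and may replay on restart"
else
    echo "no commitlog segments on disk"
fi
```

If segments remain, moving them aside before restarting (as Mike describes below) is the blunt but effective workaround.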
@Mike
I am glad to see I am not the only one with this issue (though I am sorry
it happened to you, of course).
Isn't drain supposed to clear the commit logs? Did removing them work
properly?
In his warning to C* users, Jonathan Ellis said that a drain would avoid
this issue. It seems like i
Alain,
We performed a 1.1.3 -> 1.1.6 upgrade and found that all the logs replayed
regardless of the drain. After noticing this on the first node, we did the
following:
* nodetool flush
* nodetool drain
* service cassandra stop
* mv /path/to/logs/*.log /backup/
* apt-get install cassandra
I also
On Thu, Nov 15, 2012 at 6:21 AM, Alain RODRIGUEZ wrote:
> We had an issue with counters over-counting even using the nodetool drain
> command before upgrading...
You're sure the over-count was caused by the upgrade? Counters can be
counted on (heh) to over-count. What is the scale of the over-count?
"This looks like the counters were more out of sync before the upgrade than
after?"
My guess is the upgrade makes some counters over-count, since I saw the value
of the sum of our daily counter increase by 2000 after each restart, at the
exact moment the node is marked as being up. This counter
Here is an example of the increase for some counter (counting events per
hour)
time (UTC)    0   1   2   3   4    5    6    7    8    9   10   11   12   13
Good value   88  44  26  35  26   86  187  251  455  389  473  367  453  373
C* counter  149  82  45  68  38  146  329  414  746  566  473  377  453  373
I finished my Cassandra 1.1.6 upgrades at 9
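To put a number on the over-count, the per-hour deltas from the table above can be summed directly (the values are copied from the table; the loop itself is just throwaway arithmetic):

```shell
# Per-hour delta between the expected ("good") values and what the
# C* counter reported, using the figures from the table above.
good="88 44 26 35 26 86 187 251 455 389 473 367 453 373"
seen="149 82 45 68 38 146 329 414 746 566 473 377 453 373"
total=0
hour=0
set -- $seen
for g in $good; do
    s=$1; shift
    d=$((s - g))
    total=$((total + d))
    printf 'hour %02d: +%d\n' "$hour" "$d"
    hour=$((hour + 1))
done
echo "total over-count: $total"
```

The deltas collapse after hour 9 (only hour 11 shows a small +10), which would be consistent with the over-count happening at restart during the upgrades rather than during normal operation.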
Can you provide an example of the increase ?
Can you provide the log from startup ?
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 16/11/2012, at 3:21 AM, Alain RODRIGUEZ wrote:
> We had an issue with counters ove