On Dec 2, 2014 3:45 PM, "Robert Coli" wrote:
> On Tue, Dec 2, 2014 at 12:21 PM, Robert Wille wrote:
>
>> As a test, I took down a node, deleted /var/lib/cassandra and restarted it.
On a mac this works (different sed, use an actual newline):
"
nodetool info -T | grep ^Token | awk '{ print $3 }' | tr \\n , | sed -e 's/,$/\
/'
"
Otherwise the last token will have an 'n' appended which you may not notice.
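A hedged alternative sketch that sidesteps the BSD/GNU sed newline difference entirely: let awk do the joining, so there is no trailing comma to strip in the first place. The printf below simulates `nodetool info -T` output with made-up token values:

```shell
# Simulated 'nodetool info -T' output (token values are made up);
# awk joins the third field of each ^Token line with commas and
# ends with exactly one newline -- no trailing comma, no stray 'n'.
printf 'Token            : 100\nToken            : 200\nToken            : 300\n' |
  awk '/^Token/ { printf "%s%s", sep, $3; sep = "," } END { print "" }'
# prints: 100,200,300
```

The same awk program works unchanged with BSD (macOS) and GNU awk, so the one-liner is portable across the machines discussed here.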
On Fri, Dec 5, 2014 at 4:34 PM, Robert Coli wrote:
> On Wed, Dec 3, 2014 at 10:10 AM, Robert Wille wrote:
On Wed, Dec 3, 2014 at 10:10 AM, Robert Wille wrote:
> Load and ownership didn’t correlate nearly as well as I expected. I have
> lots and lots of very small records. I would expect very high correlation.
>
> I think the moral of the story is that I shouldn’t delete the system
> directory. If I
Well, as I understand it, deleting the entire data directory, including
system, should have the same effect as if you totally lost a node and were
bootstrapping a replacement. And that's an operation you should be able to
have confidence in.
I wonder what your load does if you run nodetool cleanup.
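As a sketch of the "recommission it properly" path this thread converges on — decommission, removenode, replace_address, and cleanup are all standard Cassandra nodetool/JVM options, but the host ID and IP address below are placeholders, not values from this cluster:

```shell
# Option A: the node is still up and you want it gone cleanly.
# Run on the node itself; it streams its data to the other replicas.
nodetool decommission

# Option B: the node is already dead. From any live node, remove it
# by Host ID (placeholder shown; read the real one from nodetool status).
nodetool removenode 11111111-2222-3333-4444-555555555555

# Option C: replace a dead node in place, keeping its token ranges.
# Start the replacement JVM with (placeholder IP of the dead node):
#   -Dcassandra.replace_address=10.0.0.12

# After any of the above, trim data the surviving nodes no longer own:
nodetool cleanup
```

Deleting /var/lib/cassandra without one of these steps leaves the old node's tokens in the ring, which is the failure mode discussed below.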
Load and ownership didn’t correlate nearly as well as I expected. I have lots
and lots of very small records. I would expect very high correlation.
I think the moral of the story is that I shouldn’t delete the system directory.
If I have issues with a node, I should recommission it properly.
Robert
How does the difference in load compare to the effective ownership? If you
deleted the system directory as well, you should end up with new ranges, so
I'm wondering if perhaps you just ended up with a really bad shuffle. Did
you run removenode on the old host after you took it down (I assume so
since
I didn’t do anything except kill the server process, delete /var/lib/cassandra,
and start it back up again. nodetool status shows all nodes as UN, and doesn’t
display any unexpected nodes.
I don’t know if this sheds any light on the issue, but I’ve added a
considerable amount of data to the cluster.
On Tue, Dec 2, 2014 at 2:21 PM, Robert Wille wrote:
> As a test, I took down a node, deleted /var/lib/cassandra and restarted
> it.
Did you decommission or removenode it when you took it down? If you
didn't, the "old" node is still in the ring, and affects the replication.
--
Tyler Hobbs
On Tue, Dec 2, 2014 at 1:06 PM, Robert Wille wrote:
> I meant to mention that I had run repair, but neglected to do so. Sorry
> about that. Repair runs pretty quick (a fraction of the time that
> compaction takes) and doesn’t seem to do anything.
>
If repair doesn't find differing ranges, your
I meant to mention that I had run repair, but neglected to do so. Sorry about
that. Repair runs pretty quick (a fraction of the time that compaction takes)
and doesn’t seem to do anything.
On Dec 2, 2014, at 1:44 PM, Robert Coli <rc...@eventbrite.com> wrote:
On Tue, Dec 2, 2014 at 12:21 PM, Robert Wille wrote:
> As a test, I took down a node, deleted /var/lib/cassandra and restarted
> it. After it joined the cluster, it’s about 75% the size of its neighbors
> (both in terms of bytes and numbers of keys). Prior to my test it was
> approximately the same size.
As a test, I took down a node, deleted /var/lib/cassandra and restarted it.
After it joined the cluster, it’s about 75% the size of its neighbors (both in
terms of bytes and numbers of keys). Prior to my test it was approximately the
same size. I have no explanation for why that node would shrink.