Hi,
I hate to be a clod, but I'd really like to unsubscribe from this list. I've
tried every permutation I can think of to do it "the right way", and all of the
styles in the help message. If there's a moderator reading this could you
please take me off the list?
Thanks,
Luke
of data.
Cheers
-----
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 7/06/2012, at 2:51 AM, Luke Hospadaruk wrote:
Thanks for the tips
Some things I found looking around:
grepping the logs for a specific repair I ran yesterday:
/var/l
Another little question:
I just added some EBS volumes to the nodes that are particularly choked and I
am now running major compactions on those nodes (and all is well so far). Once
everything gets back down to a normal size, can I move all the data back off
the EBS volumes?
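For the "move the data back off EBS later" step, a minimal sketch of one way to do it, assuming hypothetical mount points (the real paths depend on your `data_file_directories` in cassandra.yaml), and assuming you take the node down while copying:

```shell
#!/bin/sh
# Hypothetical paths for illustration; substitute your own
# data_file_directories entries from cassandra.yaml.
EBS_DIR=/mnt/ebs/cassandra/data        # assumed temporary EBS mount
LOCAL_DIR=/var/lib/cassandra/data      # assumed original data directory

if ! command -v nodetool >/dev/null 2>&1; then
  echo "nodetool not found; steps shown for reference only"
else
  # Flush memtables and stop accepting writes before touching files.
  nodetool drain
  sudo service cassandra stop

  # Copy the SSTables back, then remove the EBS path from
  # data_file_directories before restarting the node.
  rsync -av "$EBS_DIR"/ "$LOCAL_DIR"/
  sudo service cassandra start
fi
```

With RF=4 on a 4-node cluster every node has a full copy, so doing this one node at a time keeps the data available throughout.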
something along th
the rows are.
If you have enabled compression, run nodetool upgradesstables to compress them.
In general, try to free up space on the nodes by compacting, moving files to a new mount, etc., so that you can get repair to run.
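The two `nodetool` steps above can be sketched as follows; the keyspace and column family names here are placeholders, not from the thread:

```shell
#!/bin/sh
# Hypothetical keyspace/column family names for illustration.
KEYSPACE=my_keyspace
CF=my_cf

if ! command -v nodetool >/dev/null 2>&1; then
  echo "nodetool not found; commands shown for reference only"
else
  # Rewrite existing SSTables with the compression settings now in the
  # schema; files written before compression was enabled stay
  # uncompressed until rewritten.
  nodetool upgradesstables "$KEYSPACE" "$CF"

  # A major compaction merges SSTables and drops overwritten/tombstoned
  # data, which can free enough disk for repair to complete.
  nodetool compact "$KEYSPACE" "$CF"
fi
```

Note that a major compaction produces one large SSTable, so it temporarily needs extra free space while it runs.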
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
I have a 4-node cluster with one keyspace (aside from the system keyspace)
with the replication factor set to 4. The disk usage between the nodes is
pretty wildly different and I'm wondering why. It's becoming a problem
because one node is getting to the point where it sometimes fails to
compact
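To see where the imbalance actually is, a quick sketch, assuming the default package data directory (yours may differ):

```shell
#!/bin/sh
# Per-keyspace on-disk size on this node; the path is the common
# package default and may not match your install.
DATA_DIR=${DATA_DIR:-/var/lib/cassandra/data}

if [ -d "$DATA_DIR" ]; then
  du -sh "$DATA_DIR"/*
else
  echo "no data dir at $DATA_DIR"
fi

# Cluster-wide view: the Load column of `nodetool ring` should be
# roughly equal across an RF=4, 4-node cluster once compaction has
# caught up; a node that is far larger likely has garbage that a
# compaction or cleanup would reclaim.
if command -v nodetool >/dev/null 2>&1; then
  nodetool ring
fi
```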