Not directly, but you should be able to use the output of the getendpoints
operation, together with nodetool ring, to find the IP addresses that match
the DC you are looking for.
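As a rough sketch (the keyspace, column family and key names below are just
placeholders), you can cross-reference the two outputs like this:

    # replica endpoints for a given row key
    nodetool -h localhost getendpoints MyKeyspace MyColumnFamily some_row_key

    # every node's address, DC, rack, status and token
    nodetool -h localhost ring

    # any address returned by getendpoints that appears under the DC you
    # care about in the ring output is a replica in that DC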
Thanks,
Mike
On Thu, May 9, 2013 at 11:08 AM, Kanwar Sangha wrote:
> Thanks! Is there also a way to find out the replicas for a particular DC?
Not sure about making things go faster, but you should be able to monitor
it with nodetool compactionstats.
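For example (the host flag and interval are just illustrative):

    # pending compaction tasks, plus bytes completed/total for active ones
    nodetool -h localhost compactionstats

    # re-run it periodically to get a rough sense of progress
    watch -n 30 nodetool -h localhost compactionstats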
Thanks,
Mike
On Tue, May 7, 2013 at 12:43 PM, Brian Tarbox wrote:
> I'm recovering from a significant failure and so am doing lots of nodetool
> move, removetoken, repair and cleanup.
>
I'm running a 27-node Cassandra cluster on a SAN without issue. I will be
perfectly clear though: the hosts are multi-homed to different
switches/fabrics in the SAN, we have an _expensive_ EMC array, and other
than a datacenter-wide power outage, there's no SPOF for the SAN. We use
it because it's
I'm fairly new to Cassandra myself, but I had to solve a similar problem.
If numeric ordering of the student number values is not important to you,
you can store them as UTF8 values (ASCII would work too, and may be a
better choice), and the resulting columns would be sorted by the lexical
ordering of the numbers.
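As a minimal sketch, assuming the old cassandra-cli / Thrift interface (the
keyspace and column family names are made up):

    # feed the schema statements to cassandra-cli on stdin
    cassandra-cli -h localhost <<'EOF'
    use MySchool;
    create column family StudentScores
      with comparator = UTF8Type
      and key_validation_class = UTF8Type;
    EOF

One thing to keep in mind: lexical order puts '10' before '2', so if you
ever do want numeric ordering, zero-padding the values ('0002', '0010')
makes the lexical and numeric orderings agree.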
Agreed, +1 for Hector: it's feature-rich, has an active development
community, and is pretty well documented to get you started. I also agree
with the comments on avoiding raw Thrift; I'm working on writing a more
up-to-date client for Perl, and looking at the code generated by the
Thrift compiler, it's easy to see why people steer clear of it.
Thanks, everyone, for the pointers. I've found an opportunity to simplify
the setup: still 2 DCs and 3 racks (RF = 1 for the DC with 1 rack, and RF
= 2 for the DC with 2 racks), but now each rack contains 9 nodes with even
token distribution.
Once I got the new topology in place, I ran multiple repairs.
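For reference, a replication setup along those lines might look roughly
like the following; the keyspace and datacenter names are placeholders,
and this assumes the old cassandra-cli interface:

    # RF = 1 in the single-rack DC, RF = 2 in the two-rack DC
    cassandra-cli -h localhost <<'EOF'
    update keyspace MyKeyspace
      with placement_strategy = 'NetworkTopologyStrategy'
      and strategy_options = {DC1 : 1, DC2 : 2};
    EOF

    # then repair the primary range on each node in turn
    nodetool -h node1.example.com repair -pr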
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 17/08/2012, at 2:56 AM, Michael Morris
> wrote:
>
Occasionally, as I'm doing my regular anti-entropy repair, I end up with a
node that uses an exceptional amount of disk space (the node should have
about 5-6 GB of data on it, but ends up with 25+ GB and consumes the
limited amount of disk space I have available).
How come a node would consume 5x its normal amount of disk space?