RE: Cassandra seems to replace existing node without specifying replace_address

2015-05-29 Thread Thomas Whiteway
On Thu, May 28, 2015 at 2:00 AM, Thomas Whiteway <thomas.white...@metaswitch.com> wrote: Sorry, I should have been clearer. In this case we’ve decommissioned the node and deleted the data, commitlog, and saved caches directories so we’…
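For anyone skimming the archive, a minimal sketch of the clean-out described above, assuming the default packaged directory layout (adjust the paths to match your data_file_directories, commitlog_directory, and saved_caches_directory settings):

    # Remove the node from the ring, then clear its on-disk state
    nodetool decommission
    rm -rf /var/lib/cassandra/data \
           /var/lib/cassandra/commitlog \
           /var/lib/cassandra/saved_caches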

RE: Cassandra seems to replace existing node without specifying replace_address

2015-05-28 Thread Thomas Whiteway
…just not in 2.1.4. Thomas From: Robert Coli [mailto:rc...@eventbrite.com] Sent: 27 May 2015 20:41 To: user@cassandra.apache.org Subject: Re: Cassandra seems to replace existing node without specifying replace_address On Wed, May 27, 2015 at 5:48 AM, Thomas Whiteway <thomas.white…

Cassandra seems to replace existing node without specifying replace_address

2015-05-27 Thread Thomas Whiteway
Hi, I've been investigating using replace_address to replace a node that hasn't left the cluster cleanly, and after upgrading from 2.1.0 to 2.1.4 it seems that adding a new node will automatically replace an existing node with the same IP address even if replace_address isn't used. Does anyone…
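For context, the flag this thread is about is normally passed as a JVM option on the replacement node before its first start; a sketch, assuming the stock cassandra-env.sh:

    # Conventional node replacement in 2.1 (the surprise reported above is
    # that a matching IP triggers replacement even WITHOUT this flag)
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=<dead_node_ip>"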

RE: Performance Issue: Keeping rows in memory

2014-10-22 Thread Thomas Whiteway
…cache: https://github.com/tobert/pcstat On Wed, Oct 22, 2014 at 4:34 AM, Thomas Whiteway wrote: > Hi, I'm working on an application using a Cassandra (2.1.0) cluster where - our entire dataset is around 22GB - each node has 48GB…
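A hypothetical invocation of the pcstat tool linked above, to see how much of each SSTable is resident in the Linux page cache (keyspace and table paths are illustrative):

    # Reports cached vs. total pages per file
    pcstat /var/lib/cassandra/data/mykeyspace/mytable-*/*-Data.db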

RE: Performance Issue: Keeping rows in memory

2014-10-22 Thread Thomas Whiteway
…Oct 22, 2014 at 1:34 PM, Thomas Whiteway <thomas.white...@metaswitch.com> wrote: Hi, I’m working on an application using a Cassandra (2.1.0) cluster where - our entire dataset is around 22GB - each node has 48GB of memory but only a single (mechanical) hard disk…

Performance Issue: Keeping rows in memory

2014-10-22 Thread Thomas Whiteway
Hi, I'm working on an application using a Cassandra (2.1.0) cluster where - our entire dataset is around 22GB - each node has 48GB of memory but only a single (mechanical) hard disk - in normal operation we have a low level of writes and no reads - very occa…
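One way to keep rows in memory on 2.1, offered as a sketch rather than as the solution the thread settled on (the cache size and keyspace/table names are illustrative):

    # cassandra.yaml: give the row cache some capacity, e.g.
    #   row_cache_size_in_mb: 2048
    # then opt the table in (2.1 map syntax for the caching option):
    cqlsh -e "ALTER TABLE mykeyspace.mytable WITH caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'};"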