I meant that I will upgrade them one by one, as quickly as I can. The end result will be that all nodes in the cluster are running 2.1.14. Thanks for the feedback, guys.
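For my own reference, here is a rough sketch of what I plan to run on each node, one node at a time. The package and service names are assumptions for a Debian-style install (adjust for your environment); the nodetool steps follow the usual rolling-upgrade sequence:

    # 1. Stop accepting traffic and flush memtables to disk.
    nodetool drain

    # 2. Stop the old 2.0.9 node.
    sudo service cassandra stop

    # 3. Upgrade the package to 2.1.14 (exact package/pin syntax is an
    #    assumption), then merge any cassandra.yaml changes called out
    #    in CHANGES.txt / NEWS.txt before restarting.
    sudo apt-get install cassandra=2.1.14

    # 4. Start the upgraded node and confirm it rejoins the ring.
    sudo service cassandra start
    nodetool status

    # 5. Rewrite this node's SSTables into the 2.1 format.
    nodetool upgradesstables

I will wait for the node to show Up/Normal in nodetool status before moving on to the next one.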
On Fri, May 6, 2016 at 10:07 AM, Mark Dewey <milde...@gmail.com> wrote:

> If by one-by-one you mean you want to upgrade one after the other, doing a
> rolling restart of each node along the way, then yes, that is doable and
> recommended. C* guarantees interoperability between adjacent minor
> versions, i.e., 2.0.x and 2.1.x in this example. Check your CHANGES.txt
> file for any upgrade gotchas, like new mandatory configs and the like.
>
> What Carlos was warning you against is upgrading just one node and leaving
> it that way for a long time.
>
> Mark
>
> On Fri, May 6, 2016 at 2:41 AM Carlos Rolo <r...@pythian.com> wrote:
>
> > Don't do that.
> >
> > In any case, an upgrade between 2.0.x and 2.1.x is not so complex or
> > difficult to do, and it is "downtime free". I would take the opportunity
> > to do a full cluster upgrade.
> >
> > Regards,
> >
> > Carlos Juzarte Rolo
> > Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
> >
> > Pythian - Love your data
> >
> > rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
> > linkedin.com/in/carlosjuzarterolo
> > Mobile: +351 918 918 100
> > www.pythian.com
> >
> > On Thu, May 5, 2016 at 4:54 PM, Li, Guangxing <guangxing...@pearson.com>
> > wrote:
> >
> > > Hi,
> > >
> > > Due to internal infrastructure changes, we have to replace all nodes
> > > with new ones. All the existing nodes are running Cassandra Community
> > > version 2.0.9. I was thinking this might also be an opportunity for us
> > > to upgrade to Cassandra Community version 2.1.14. I hope I am not
> > > asking a crazy question: can I replace a 2.0.9 node with a 2.1.14 node
> > > in the cluster, i.e., can 2.0.9 nodes and 2.1.14 nodes work peacefully
> > > together in a cluster if I replace the 2.0.9 nodes with 2.1.14 nodes
> > > one by one?
> > >
> > > Thanks.
> > >
> > > George.