1:00 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: Upgrade strategy for high number of nodes
Thanks for the pointer. We haven't changed the data model much in a long
time, so before applying workarounds (scrub) it's worth understanding the
root cause of the problem. This might be the reason why running
upgradesstables in parallel was not recommended.
-Shishir
On Sat, 30 Nov 2019, 10:37 Jeff Jirsa wrote:
Scrub really shouldn’t be required here.
If there’s ever a step that reports corruption, it’s either a very very old
table where you dropped columns previously or did something “wrong” in the past
or a software bug. The old dropped column really should be obvious in the stack
trace - anything
Some more background: we are planning (and have tested) a binary upgrade
across all nodes without downtime. The next step is running upgradesstables,
since the C* file format and version change (from format big, version mc, to
format bti, version aa). Refer to
https://docs.datastax.com/en/dse/6.0/dse-admin/datastax_enterprise/tools
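One way to see how far the rewrite has progressed is to inventory the sstable files on disk by format and version. A minimal sketch, assuming the usual <version>-<generation>-<format>-Component.db filename convention (e.g. mc-3-big-Data.db, aa-1-bti-Data.db) and the default data directory, which you'd adjust to your data_file_directories:

```shell
# Count sstables on disk by version/format (e.g. "mc big" vs "aa bti").
# Assumes filenames like mc-3-big-Data.db; the default data dir is an
# assumption -- pass your own data_file_directories path instead.
sstable_versions() {
  dir="${1:-/var/lib/cassandra/data}"
  find "$dir" -name '*-Data.db' 2>/dev/null \
    | awk -F/ '{ n = $NF; split(n, p, "-"); print p[1], p[3] }' \
    | sort | uniq -c
}
```

When the count of old-format files reaches zero, upgradesstables has nothing left to do for that node.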
Hello Shishir,
It shouldn't be necessary to take downtime to perform upgrades of a
Cassandra cluster. It sounds like the biggest issue you're facing is the
upgradesstables step. upgradesstables is not strictly necessary before a
Cassandra node re-enters the cluster to serve traffic; in my experien
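The per-node sequence described above (binary upgrade first, upgradesstables deferred) can be sketched as a dry-run script. Hostnames, the systemd unit name, and the install step are assumptions, not from this thread; the script only prints the steps so they can be reviewed before running anything:

```shell
#!/bin/sh
# Dry-run sketch of a rolling upgrade: prints the per-node steps instead of
# executing them. Hostnames, service names, and the install step are
# assumptions -- adapt to your environment before use.
upgrade_node() {
  host="$1"
  echo "ssh $host nodetool drain"                 # flush memtables before stopping
  echo "ssh $host sudo systemctl stop cassandra"  # stop the old binary
  echo "ssh $host '<install new Cassandra binaries>'"
  echo "ssh $host sudo systemctl start cassandra" # node rejoins, serves traffic
  # upgradesstables can run later, one node at a time, with the cluster live
  echo "ssh $host nodetool upgradesstables"
}

# Upgrade strictly one node at a time, in order.
for h in "$@"; do
  upgrade_node "$h"
done
```

Running `sh rolling_upgrade.sh node1 node2` just emits the command sequence per node, which makes it easy to sanity-check the order before executing for real.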
Hi,
Need input on a Cassandra upgrade strategy for the below:
1. We have datacenters across 4 geographies (multiple isolated deployments in
each DC).
2. The number of Cassandra nodes in each deployment is between 6 and 24.
3. Data volume on each node is between 150 and 400 GB.
4. All production environments have DR se