I agree with not recommending any version that is not production ready. If
something is not production ready, it should ideally be a release candidate,
and when GA happens it should implicitly mean stable, since it is assumed that
GA is only done for production-ready releases.
Thanks
Anuj
On Tue, Jan 19, 2016 at 11:17 PM, Jack Krupansky
wrote:
> It's great to see clear support status marked on the 3.0.x and 2.x releases
> on the download page now. A couple more questions...
>
> 1. What is the support and stability status of 3.1 and 3.2 (as opposed to
> 3.2.1)? Are they "for non-pr
It's great to see clear support status marked on the 3.0.x and 2.x releases
on the download page now. A couple more questions...
1. What is the support and stability status of 3.1 and 3.2 (as opposed to
3.2.1)? Are they "for non-production development only"? Are they considered
"stable"? The page
Actually, I have not checked how the repair -pr abort logic is implemented in
the code. So irrespective of repair -pr or full repair scenarios, the problem
can be stated as follows:
20 node cluster, RF=5, Read/Write Quorum, gc grace period=20. If a node goes
down, 1/20th of the data for which the failed node was
Hi Tyler,
I think the scenario needs some correction. 20 node cluster, RF=5, Read/Write
Quorum, gc grace period=20. If a node goes down, repair -pr would fail on the
4 nodes maintaining replicas, and full repair would fail on an even greater
number of nodes, but not 19. Please confirm.
Anyways the
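The arithmetic behind that claim can be sketched with a toy ring model. This is a hedged illustration only: it assumes SimpleStrategy-style contiguous replication with no vnodes, and all names (`replicas`, `pr_fails`, `full_fails`) are illustrative, not Cassandra internals.

```python
# Toy model: 20-node ring, RF=5, SimpleStrategy-style replica placement.
N = 20   # nodes in the ring
RF = 5   # replication factor

def replicas(primary):
    """A range is replicated on its primary owner plus the next RF-1
    nodes clockwise on the ring."""
    return {(primary + i) % N for i in range(RF)}

down = 0  # the failed node

# `repair -pr` on node n repairs only n's primary range, so it fails
# when the down node holds a replica of that range.
pr_fails = [n for n in range(N) if n != down and down in replicas(n)]

# A full repair on node n touches every range n replicates, so it fails
# when any of those ranges is also replicated on the down node.
full_fails = [n for n in range(N) if n != down
              and any(down in replicas(p) for p in range(N)
                      if n in replicas(p))]

print(len(pr_fails))    # 4  (the four predecessors of the down node)
print(len(full_fails))  # 8  (four neighbors on each side)
```

Under these assumptions, repair -pr fails on exactly the RF-1 = 4 nodes whose primary range is replicated on the failed node, while full repair fails on the 8 nodes that share any replica set with it, consistent with "more than 4 but not 19".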
There is a JIRA
issue: https://issues.apache.org/jira/browse/CASSANDRA-10446 .
But it's open with Minor priority and type Improvement. I think it's a very
valid concern for everyone, and especially for users who have bigger clusters.
It is more of an issue related to a design decision rather than an improvement
On Tue, Jan 19, 2016 at 10:44 AM, Anuj Wadehra
wrote:
>
> Consider a scenario where I have a 20 node cluster, RF=5, Read/Write
> Quorum, gc grace period=20. My cluster is fault tolerant and it can afford
> 2 node failures. Suddenly, one node goes down due to some hardware issue.
> It's 10 days sinc
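The fault-tolerance figure in the quoted scenario follows from quorum arithmetic; a minimal sketch of that calculation (variable names are illustrative):

```python
# Quorum arithmetic behind "can afford 2 node failures"
# for RF=5 with QUORUM reads and writes.
RF = 5
quorum = RF // 2 + 1      # replicas that must respond: 3
tolerated = RF - quorum   # replica failures a token range survives: 2
print(quorum, tolerated)  # prints: 3 2
```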
Thanks Tyler!!
I understand that we need to consider a node as lost when it's down for gc
grace, and bootstrap it. My question is more about the JIRA
https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-2290
where an intentional decision was taken to abort the repair if a single r
Primarily, CASSANDRA-8099. If you look at the Version class in
o.a.c.io.sstable.format.big.BigFormat, there are comments that list the
different sstable versions along with what changes went into those. You
can look at git blame to see what the related jira tickets are.
On Mon, Jan 18, 2016 at 7
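As a hedged illustration of the git blame suggestion above (the source path assumes the Cassandra 3.x tree layout, and the `-L` range is an approximation you would adjust to the version-list comment block):

```shell
# Annotate the sstable version comments in BigFormat to find the
# JIRA tickets behind each format change.
git blame -L '/public static class BigVersion/,+40' \
    src/java/org/apache/cassandra/io/sstable/format/big/BigFormat.java
```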
On Fri, Jan 15, 2016 at 12:06 PM, Anuj Wadehra
wrote:
> Increase the gc grace period temporarily. Then we should have capacity
> planning to accommodate the extra storage needed for the extra gc grace that
> may be needed in node failure scenarios.
I would do this. Nodes that are down for l
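A sketch of what temporarily raising gc grace might look like in practice. This is a hedged config fragment: the keyspace and table names are placeholders, and the values simply convert days to seconds.

```shell
# Raise gc_grace_seconds to 20 days while the node is being repaired/replaced.
cqlsh -e "ALTER TABLE my_ks.my_table WITH gc_grace_seconds = 1728000;"

# ... repair or replace the failed node, then restore the original value
# (10 days is the Cassandra default).
cqlsh -e "ALTER TABLE my_ks.my_table WITH gc_grace_seconds = 864000;"
```

Note this must be done per table, and tombstones accumulated during the window consume the extra storage the capacity planning above refers to.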
The Cassandra team is pleased to announce the release of Apache Cassandra
version 3.2.1.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source an