1) What exactly is written to the commit log? Is it just the id or the
whole object?
It's a raw commit log, so the entire serialized mutation is written.
2) If it's just the IDs of the inserted/modified rows, then is the client
expected to read the whole object from the ID?
see 1
3) If it's the entire pa
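To illustrate the answer above (the commit log holds the entire serialized mutation, so replay never has to resolve an id back into an object), here is a minimal conceptual sketch of a write-ahead log in that style. This is an assumption-laden toy, not Cassandra's actual `CommitLog` implementation; all names here are hypothetical.

```python
import json

# Conceptual sketch only (NOT Cassandra internals): a commit log whose
# records are whole serialized mutations rather than row ids.

class CommitLog:
    def __init__(self):
        self.segments = []  # in-memory stand-in for on-disk log segments

    def append(self, mutation):
        # The full mutation (keyspace, key, changed columns) is serialized
        # and appended -- not just an identifier.
        self.segments.append(json.dumps(mutation))

    def replay(self):
        # Recovery deserializes each record and can reapply it directly,
        # with no extra read required to fetch the object by id.
        return [json.loads(rec) for rec in self.segments]

log = CommitLog()
log.append({"keyspace": "ks", "key": "user:1", "columns": {"name": "Ada"}})
recovered = log.replay()
print(recovered[0]["columns"]["name"])  # Ada
```

The design point: because every record is self-contained, a crashed node can rebuild its memtable state purely from the log.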
+1
On Wed, Feb 15, 2017 at 7:16 PM, Michael Shuler
wrote:
> I propose the following artifacts for release as 2.1.17.
>
> sha1: 943db2488c8b62e1fbe03b132102f0e579c9ae17
> Git:
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/2.1.17-tentative
> Artifacts:
> https:
+1
On Wed, Feb 15, 2017 at 7:15 PM, Michael Shuler
wrote:
> I propose the following artifacts for release as 3.0.11.
>
> sha1: 338226e042a22242645ab54a372c7c1459e78a01
> Git:
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/3.0.11-tentative
> Artifacts:
> https:
+1
On Wed, Feb 15, 2017 at 7:16 PM, Michael Shuler
wrote:
> I propose the following artifacts for release as 2.2.9.
>
> sha1: 70a08f1c35091a36f7d9cc4816259210c2185267
> Git:
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/2.2.9-tentative
> Artifacts:
> https://
GitHub user xiaolong302 opened a pull request:
https://github.com/apache/cassandra/pull/94
CASSANDRA-10726: Read repair inserts should use speculative retry
1. do an extra read repair retry to only guarantee "monotonic quorum
read". Here "quorum" means majority of nodes a
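The "monotonic quorum read" idea behind this PR can be sketched roughly as follows: before a coordinator returns a repaired value, it pushes the newest version back to a majority of replicas, so a later quorum read can never observe an older value. The PR description is truncated above, so this is a hedged illustration of the general blocking-read-repair technique, not the patch's actual code; every name below is hypothetical.

```python
# Hypothetical sketch of a blocking read repair that preserves monotonic
# quorum reads: the coordinator only answers once the newest (value, ts)
# pair is confirmed on a majority of replicas.

def quorum_read_with_repair(replicas):
    # replicas: list of mutable dicts like {"value": ..., "ts": ...}
    quorum = len(replicas) // 2 + 1
    responses = replicas[:quorum]                   # contact a quorum
    newest = max(responses, key=lambda r: r["ts"])  # latest timestamp wins
    acked = 0
    for r in responses:
        # Blocking repair: write the newest version back before returning.
        r["value"], r["ts"] = newest["value"], newest["ts"]
        acked += 1
    if acked >= quorum:
        return newest["value"]
    raise TimeoutError("could not repair a quorum")

replicas = [{"value": "old", "ts": 1}, {"value": "new", "ts": 2},
            {"value": "old", "ts": 1}]
print(quorum_read_with_repair(replicas))  # new
```

The "extra retry" the PR mentions would fit where the repair writes fail: rather than giving up, the coordinator retries (speculatively, against other replicas) until a majority holds the newest value.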
Cassandra is being used on a large scale at Uber. We usually create
dedicated clusters for each of our internal use cases; however, that is
difficult to scale and manage.
We are investigating the approach of using a single shared cluster with
100s of nodes and handling 10s to 100s of different use cases