Delta could provide us with a mechanism to replicate only what is required.
I wonder if we could not create a simple operation replication
framework. Rather than writing a potentially large amount of code for
delta, we could replicate only the operation.
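As a rough illustration of what "replicate only the operation" could mean here, the sketch below is purely hypothetical (the class and method names are not part of the PR or of Geode): it ships a small description of a single hash-field change and re-applies it on each member, so only the operation crosses the wire.

// Hypothetical sketch only -- not part of the PR or of Geode's API.
// Illustrates "replicate the operation, not the collection": ship a small
// description of the change and re-apply it on each receiving member.
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

public class HashSetFieldOp implements Serializable {
    private final String key;    // Redis hash key
    private final String field;  // field within the hash
    private final String value;  // new field value

    public HashSetFieldOp(String key, String field, String value) {
        this.key = key;
        this.field = field;
        this.value = value;
    }

    /** Apply the operation to a local copy of the data; only this small
     *  object would need to travel across the wire. */
    public void apply(Map<String, Map<String, String>> hashes) {
        hashes.computeIfAbsent(key, k -> new HashMap<>()).put(field, value);
    }
}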
On 2/27/17 07:18, Wes Williams wrote:
>> Replicating a whole collection because of 1 change does not really make
too much sense. <<
I agree, but won't delta replication prevent sending the entire collection
across the wire?
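For context, Geode's existing delta propagation works roughly like the minimal sketch below: a region value that implements org.apache.geode.Delta serializes only the changed field rather than the whole map. The class and field names here are illustrative, not taken from the PR.

// Minimal sketch of Geode delta propagation, for illustration only.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.HashMap;
import org.apache.geode.Delta;
import org.apache.geode.InvalidDeltaException;

public class DeltaHash extends HashMap<String, String> implements Delta {
    private transient String changedField;   // last field modified locally

    public void setField(String field, String value) {
        put(field, value);
        changedField = field;
    }

    @Override
    public boolean hasDelta() {
        return changedField != null;
    }

    @Override
    public void toDelta(DataOutput out) throws IOException {
        // Send only the changed field and its value, not the whole map.
        out.writeUTF(changedField);
        out.writeUTF(get(changedField));
        changedField = null;
    }

    @Override
    public void fromDelta(DataInput in) throws IOException, InvalidDeltaException {
        // Apply the single-field change on the receiving member.
        put(in.readUTF(), in.readUTF());
    }
}

A caller would typically get the value, call setField, and put it back into the region; with delta propagation enabled (the default), only the delta is sent to other members rather than the full collection.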
*Wes Williams | Pivotal Advisory **Data Engineer*
781.606.0325
http://pivotal.io/big-data/pivotal-gemfire
On Mon, Feb 27, 2017 at 10:08 AM, Udo Kohlmeyer <ukohlme...@pivotal.io>
wrote:
I've quickly gone through the changes for the pull request.
The most significant change in this pull request is that the collections
that were initially regions are now single collections (not distributed). That
said, this is something we've been discussing. The one thing I
wonder about is: what will the performance look like when the collections
become really large? Replicating a whole collection because of 1 change
does not really make too much sense.
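To make the trade-off concrete, here is a sketch of the single-collection layout under discussion (illustrative names only, not the PR's actual classes): storing a whole hash as one region value means that a single field change re-puts, and therefore re-replicates, the entire map, whereas a region-per-hash layout replicates only the changed entry.

// Sketch of the trade-off being discussed (names are illustrative).
import java.util.HashMap;
import org.apache.geode.cache.Region;

public class HashStorageExample {
    // New layout: one region entry per Redis hash.
    static void setField(Region<String, HashMap<String, String>> hashes,
                         String hashKey, String field, String value) {
        HashMap<String, String> hash = hashes.get(hashKey);
        if (hash == null) {
            hash = new HashMap<>();
        }
        hash.put(field, value);
        hashes.put(hashKey, hash);   // the whole map is serialized and replicated

        // Old layout (one region per hash, one entry per field) would have
        // replicated only the single changed entry:
        // fieldRegion.put(field, value);
    }
}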
Maybe this implementation becomes the catalyst for future improvements.
--Udo
On 2/24/17 15:25, Bruce Schuchardt wrote:
Gregory Green has posted a pull request that warrants discussion. It
improves performance for Sets and Hashes by altering the storage format for
these collections. As such, it will not permit a rolling upgrade, though
the Redis adapter is labelled "experimental", so maybe that's okay.
https://github.com/apache/geode/pull/404
The PR also fixes GEODE-2469, an inability to handle hash keys containing colons.
There was some discussion, initiated by Hitesh, about altering the storage
format. Personally, I think Gregory's changes are better than
the current implementation and we should accept them, though I haven't gone
through the code changes extensively.