Given that the feature is still labeled experimental, do our backwards 
compatibility constraints apply?

Anthony


> On Feb 27, 2017, at 12:10 PM, Swapnil Bawaskar <sbawas...@pivotal.io> wrote:
> 
> Accepting this pull request as-is will break backwards compatibility. I
> think if the new behavior is desired, it should be based on some
> configuration.
> 
> On Mon, Feb 27, 2017 at 11:40 AM Bruce Schuchardt <bschucha...@pivotal.io>
> wrote:
> 
>> I think in this case it would be replacing the messages used to
>> replicate changes with ones that send the operation + parameters instead
>> of the modified entry's new value.  That could be done by creating a new
>> subclass of BucketRegion.  Stick the operation+params into the
>> EntryEventImpl that follows the operation through the system, and in the
>> BucketRegion subclass pull out that data and send it instead of the
>> event's newValue.
>> 
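For illustration, a minimal sketch of the "operation + parameters" payload idea
described above. The class and field names are hypothetical, and the sketch omits
the BucketRegion subclass and the EntryEventImpl wiring; it only shows the data
that would travel in place of the entry's new value:

    import java.io.Serializable;
    import java.util.List;

    // Hypothetical payload carried with the event instead of the full new value.
    public class RedisOperation implements Serializable {
      private final String command;         // e.g. "HSET"
      private final byte[] key;             // the Redis key being modified
      private final List<byte[]> arguments; // command arguments, e.g. field + value

      public RedisOperation(String command, byte[] key, List<byte[]> arguments) {
        this.command = command;
        this.key = key;
        this.arguments = arguments;
      }

      public String getCommand() { return command; }
      public byte[] getKey() { return key; }
      public List<byte[]> getArguments() { return arguments; }
    }

The receiving member would re-apply the same command to its local copy of the
collection rather than overwriting the value wholesale.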
>> On 2/27/2017 at 11:13 AM, Real Wes wrote:
>>> I'm not following what a "simple operation replication framework" is and
>> how it applies to the Redis API. If you replicate operations, you still
>> need to update the data at some point, causing a synchronous replication
>> event so as to provide HA. What is, in more detail, a "simple operation
>> replication framework"?
>>> 
>>> Regards,
>>> Wes Williams
>>> Sent from mobile phone
>>> 
>>>> On Feb 27, 2017, at 2:07 PM, Bruce Schuchardt <bschucha...@pivotal.io>
>> wrote:
>>>> 
>>>> What I like about using the existing delta-prop mechanism is that it will
>> also extend to client subscriptions & WAN.  It would take a lot of work to
>> propagate Redis commands through those paths.
>>>> 
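For reference, a minimal sketch of what delta propagation could look like for a
Redis-style hash using Geode's org.apache.geode.Delta interface. DeltaHash and its
pendingChanges bookkeeping are hypothetical illustrations, not the adapter's code:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.io.Serializable;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.geode.Delta;
    import org.apache.geode.InvalidDeltaException;

    // Only fields changed since the last toDelta() call are shipped to
    // other members, not the whole map.
    public class DeltaHash implements Delta, Serializable {
      private final Map<String, String> fields = new HashMap<>();
      private final Map<String, String> pendingChanges = new HashMap<>();

      public void hset(String field, String value) {
        fields.put(field, value);
        pendingChanges.put(field, value);
      }

      @Override
      public boolean hasDelta() {
        return !pendingChanges.isEmpty();
      }

      @Override
      public void toDelta(DataOutput out) throws IOException {
        // Serialize only the changed fields.
        out.writeInt(pendingChanges.size());
        for (Map.Entry<String, String> e : pendingChanges.entrySet()) {
          out.writeUTF(e.getKey());
          out.writeUTF(e.getValue());
        }
        pendingChanges.clear();
      }

      @Override
      public void fromDelta(DataInput in) throws IOException, InvalidDeltaException {
        // Apply the changed fields on the receiving member.
        int count = in.readInt();
        for (int i = 0; i < count; i++) {
          fields.put(in.readUTF(), in.readUTF());
        }
      }
    }

Whether this pays off depends on each Redis command being expressible as a compact
delta, which is what the thread weighs against replicating the operation itself.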
>>>>> On 2/27/2017 at 10:35 AM, Hitesh Khamesra wrote:
>>>>> The simplicity of the Redis API makes this problem (delta updates) most
>> apparent. But I would imagine many Geode apps have a similar use case.
>>>>> 
>>>>> -Hitesh
>>>>> 
>>>>> From: Michael Stolz <mst...@pivotal.io>
>>>>> To: dev@geode.apache.org
>>>>> Sent: Monday, February 27, 2017 9:06 AM
>>>>> Subject: Re: [DISCUSS] changes to Redis implementation
>>>>> 
>>>>> It does seem like the operations will often be much smaller than
>> the data
>>>>> they are operating on.
>>>>> It is almost the classic "move the code to the data" pattern.
>>>>> 
>>>>> --
>>>>> Mike Stolz
>>>>> Principal Engineer, GemFire Product Manager
>>>>> Mobile: +1-631-835-4771
>>>>> 
>>>>> On Mon, Feb 27, 2017 at 10:51 AM, Udo Kohlmeyer <ukohlme...@pivotal.io>
>>>>> wrote:
>>>>> 
>>>>>> Delta could provide us a mechanism to replicate only what is required.
>>>>>> 
>>>>>> I wonder if we could not create a simple operation replication
>> framework.
>>>>>> Rather than writing a potentially large amount of code for delta, we
>>>>>> replicate only the operation.
>>>>>> 
>>>>>> 
>>>>>>> On 2/27/17 07:18, Wes Williams wrote:
>>>>>>> 
>>>>>>> Replicating a whole collection because of 1 change does not really
>> make
>>>>>>>> too much sense.<<
>>>>>>> I agree but won't delta replication prevent sending the entire
>> collection
>>>>>>> across the wire?
>>>>>>> 
>>>>>>> Wes Williams | Pivotal Advisory Data Engineer
>>>>>>> 781.606.0325
>>>>>>> http://pivotal.io/big-data/pivotal-gemfire
>>>>>>> 
>>>>>>> On Mon, Feb 27, 2017 at 10:08 AM, Udo Kohlmeyer <ukohlme...@pivotal.io>
>>>>>>> wrote:
>>>>>>> 
>>>>>>> I've quickly gone through the changes for the pull request.
>>>>>>>> The most significant change of this pull request is that the collections
>>>>>>>> that initially were regions are now single collections (not distributed).
>>>>>>>> That said, this is something that we've been discussing. The one thing I
>>>>>>>> wonder about is what the performance will look like when the collections
>>>>>>>> become really large. Replicating a whole collection because of 1 change
>>>>>>>> does not really make too much sense.
>>>>>>>> 
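For readers following along, a rough illustration of the two storage layouts being
contrasted here; the key and value types are simplified placeholders, not the
adapter's actual classes:

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.geode.cache.Region;

    public class RedisHashLayouts {

      // Current adapter (as described above): each Redis hash has its own region,
      // one region entry per hash field, so a field update only distributes that
      // single entry.
      void hsetPerFieldLayout(Region<String, String> regionForThisHash,
                              String field, String value) {
        regionForThisHash.put(field, value);
      }

      // Layout in the pull request (as described above): one shared region whose
      // entry values are whole hashes. A single field update re-puts the entire
      // collection, which is the replication cost being questioned.
      void hsetSingleEntryLayout(Region<String, Map<String, String>> hashes,
                                 String hashKey, String field, String value) {
        Map<String, String> hash = hashes.get(hashKey);
        if (hash == null) {
          hash = new HashMap<>();
        }
        hash.put(field, value);
        hashes.put(hashKey, hash); // re-put so the change is distributed
      }
    }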
>>>>>>>> Maybe this implementation becomes the catalyst for future
>> improvements.
>>>>>>>> 
>>>>>>>> --Udo
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> On 2/24/17 15:25, Bruce Schuchardt wrote:
>>>>>>>> 
>>>>>>>> Gregory Green has posted a pull request that warrants discussion. It
>>>>>>>>> improves performance for Sets and Hashes by altering the storage
>> format
>>>>>>>>> for
>>>>>>>>> these collections.  As such it will not permit a rolling upgrade,
>> though
>>>>>>>>> the Redis adapter is labelled "experimental" so maybe that's okay.
>>>>>>>>> 
>>>>>>>>> https://github.com/apache/geode/pull/404
>>>>>>>>> 
>>>>>>>>> The PR also fixes GEODE-2469, the inability to handle hash keys
>>>>>>>>> containing colons.
>>>>>>>>> 
>>>>>>>>> There was some discussion about altering the storage format that
>> was
>>>>>>>>> initiated by Hitesh.  Personally I think Gregory's changes are
>> better
>>>>>>>>> than
>>>>>>>>> the current implementation and we should accept them, though I
>> haven't
>>>>>>>>> gone
>>>>>>>>> through the code changes extensively.
