Re: Geode 1.9 Release Manager

2019-02-14 Thread Jason Huynh
I think Kirk's topic and any solution related to stats (int to long) should
be resolved before cutting the branch?

On Wed, Feb 13, 2019 at 7:48 PM Alexander Murmann 
wrote:

> If there are no other takers, I can act as release manager for 1.9 and will
> cut a release branch this week.
>
>
> On Tue, Jan 29, 2019 at 1:50 PM Alexander Murmann 
> wrote:
>
> > Hi everyone!
> >
> > February 1st is approaching rapidly which means it's almost time to cut
> > the 1.9 release. Who is interested in being the release manager for 1.9?
> >
> > Thank you!
> >
>


Re: Geode 1.9 Release Manager

2019-02-14 Thread Jason Huynh
Oops, I also would like to see GEODE-6404 resolved for 1.9

On Thu, Feb 14, 2019 at 10:20 AM Anthony Baker  wrote:

> There’s also GEODE-6393 and GEODE-6369 related to auto-reconnect issues.
>
> > On Feb 14, 2019, at 9:09 AM, Nabarun Nag  wrote:
> >
> > I think also GEODE-6391, which is to fix an NPE while propagating region
> > destroy and invalidate region messages.
> >
> > Regards
> > Naba
> >
> >
> > On Thu, Feb 14, 2019 at 9:06 AM Jason Huynh  wrote:
> >
> >> I think Kirk's topic and any solution related to stats (int to long)
> should
> >> be resolved before cutting the branch?
> >>
> >> On Wed, Feb 13, 2019 at 7:48 PM Alexander Murmann 
> >> wrote:
> >>
> >>> If there are no other takers, I can act as release manager for 1.9 and
> >> will
> >>> cut a release branch this week.
> >>>
> >>>
> >>> On Tue, Jan 29, 2019 at 1:50 PM Alexander Murmann  >
> >>> wrote:
> >>>
> >>>> Hi everyone!
> >>>>
> >>>> February 1st is approaching rapidly which means it's almost time to
> cut
> >>>> the 1.9 release. Who is interested in being the release manager for
> >> 1.9?
> >>>>
> >>>> Thank you!
> >>>>
> >>>
> >>
>
>


Re: Release branch for Apache Geode 1.9.0 has been cut

2019-02-19 Thread Jason Huynh
Correction, GEODE-6359 and GEODE-6404.

On Tue, Feb 19, 2019 at 4:49 PM Jason Huynh  wrote:

> I still haven't gotten GEODE-6404 in... I assumed that the tickets from
> the last thread were going to make it into this release?
>
> Also I think GEODE-6539 should be fixed, it looks like an NPE that occurs
> when we process leave requests.
>
>
>
> On Tue, Feb 19, 2019 at 2:25 PM Sai Boorlagadda 
> wrote:
>
>> My earlier release branch was created as 'release/1.9' without the patch
>> number in semver.
>> So I have re-created a new release branch 'release/1.9.0'.
>>
>> I will go ahead and delete the unwanted branch 'release/1.9'.
>>
>> Sai
>> On Tue, Feb 19, 2019 at 2:17 PM Sai Boorlagadda <
>> sai.boorlaga...@gmail.com>
>> wrote:
>>
>> > Hello Everyone,
>> >
>> > As discussed in my earlier email I have created a new release branch
>> for Apache Geode 1.9.0 - "release/1.9"
>> >
>> > Please do review and raise any concern with the release branch.
>> > If no concerns are raised, we will start with the voting for the
>> release candidate soon.
>> >
>> > Regards
>> > Sai
>> >
>> >
>>
>


Re: Release branch for Apache Geode 1.9.0 has been cut

2019-02-19 Thread Jason Huynh
I still haven't gotten GEODE-6404 in... I assumed that the tickets from the
last thread were going to make it into this release?

Also, I think GEODE-6539 should be fixed; it looks like an NPE that occurs
when we process leave requests.



On Tue, Feb 19, 2019 at 2:25 PM Sai Boorlagadda 
wrote:

> My earlier release branch was created as 'release/1.9' without the patch
> number in semver.
> So I have re-created a new release branch 'release/1.9.0'.
>
> I will go ahead and delete the unwanted branch 'release/1.9'.
>
> Sai
> On Tue, Feb 19, 2019 at 2:17 PM Sai Boorlagadda  >
> wrote:
>
> > Hello Everyone,
> >
> > As discussed in my earlier email I have created a new release branch for
> Apache Geode 1.9.0 - "release/1.9"
> >
> > Please do review and raise any concern with the release branch.
> > If no concerns are raised, we will start with the voting for the release
> candidate soon.
> >
> > Regards
> > Sai
> >
> >
>


Re: Release branch for Apache Geode 1.9.0 has been cut

2019-02-20 Thread Jason Huynh
Oh ok I thought I read that voting was going to start soon, so I thought
I'd raise a concern about the tickets not being fixed yet.

I meant this ticket: https://issues.apache.org/jira/browse/GEODE-6359
This seems like a bad thing to have in the product.  It looks like a possible
issue in processLeaveRequests.  I think the fix would be to just copy the
list or not log the list of members.
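
A rough sketch of the "copy the list" idea (the names here are hypothetical, not
the actual membership code): take a snapshot of the live member collection and
log that, so the log call can't trip over a concurrent change mid-iteration.

import java.util.ArrayList;
import java.util.Collection;

public class LeaveRequestLoggingSketch {
  // Render a snapshot rather than the live collection; a membership change that
  // happens while the message is being built then can't affect the log call.
  static String membersForLogging(Collection<?> liveMembers) {
    return new ArrayList<>(liveMembers).toString();
  }
}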



On Tue, Feb 19, 2019 at 5:35 PM Sai Boorlagadda 
wrote:

> GEODE-6404 can be cherry-picked when it is ready.
> The release branch is cut to avoid any risk of regression that
> can be introduced by new work being merged to develop.
>
> Do you mean GEODE-6369?
>
> On Tue, Feb 19, 2019 at 4:50 PM Jason Huynh  wrote:
>
> > Correction, GEODE-6359 and GEODE-6404.
> >
> > On Tue, Feb 19, 2019 at 4:49 PM Jason Huynh  wrote:
> >
> > > I still haven't gotten GEODE-6404 in... I assumed that the tickets from
> > > the last thread were going to make it into this release?
> > >
> > > Also I think GEODE-6539 should be fixed, it looks like an NPE that
> occurs
> > > when we process leave requests.
> > >
> > >
> > >
> > > On Tue, Feb 19, 2019 at 2:25 PM Sai Boorlagadda <
> > sai.boorlaga...@gmail.com>
> > > wrote:
> > >
> > >> My earlier release branch has created as 'release/1.9' without the
> patch
> > >> number in semver.
> > >> So I have re-created a new release branch 'release/1.9.0'.
> > >>
> > >> I will go ahead delete the unwanted branch 'release/1.9'
> > >>
> > >> Sai
> > >> On Tue, Feb 19, 2019 at 2:17 PM Sai Boorlagadda <
> > >> sai.boorlaga...@gmail.com>
> > >> wrote:
> > >>
> > >> > Hello Everyone,
> > >> >
> > >> > As discussed in my earlier email I have created a new release branch
> > >> for Apache Geode 1.9.0 - "release/1.9"
> > >> >
> > >> > Please do review and raise any concern with the release branch.
> > >> > If no concerns are raised, we will start with the voting for the
> > >> release candidate soon.
> > >> >
> > >> > Regards
> > >> > Sai
> > >> >
> > >> >
> > >>
> > >
> >
>


Re: Release branch for Apache Geode 1.9.0 has been cut

2019-02-27 Thread Jason Huynh
Hi Sai,

The fix for GEODE-6404 is now on develop (2be6375a775b6b0d00d0c41a1e2a3bf4b8745a46).
Would you be able to pull it into the 1.9 branch, or would you like me to?

Thanks,
-Jason

On Thu, Feb 21, 2019 at 3:37 PM Sai Boorlagadda 
wrote:

> GEODE-6359 is unassigned in JIRA. Who is working on it?
>
> On Wed, Feb 20, 2019 at 9:08 AM Jason Huynh  wrote:
>
> > Oh ok I thought I read that voting was going to start soon, so I thought
> > I'd raise a concern about the tickets not being fixed yet.
> >
> > I meant this ticket https://issues.apache.org/jira/browse/GEODE-6359
> This
> > seems like a bad thing to have in the product.  It looks like a possible
> > issue when processLeaveRequests.  I think the fix would be to just copy
> the
> > list or not log the list of members.
> >
> >
> >
> > On Tue, Feb 19, 2019 at 5:35 PM Sai Boorlagadda <
> sai.boorlaga...@gmail.com
> > >
> > wrote:
> >
> > > GEODE-6404 can be cherry-picked when it is ready.
> > > The release branch is cut to avoid any risk of regression that
> > > can be introduced by new work being merged to develop.
> > >
> > > Do you mean GEODE-6369?
> > >
> > > On Tue, Feb 19, 2019 at 4:50 PM Jason Huynh  wrote:
> > >
> > > > Correction, GEODE-6359 and GEODE-6404.
> > > >
> > > > On Tue, Feb 19, 2019 at 4:49 PM Jason Huynh 
> wrote:
> > > >
> > > > > I still haven't gotten GEODE-6404 in... I assumed that the tickets
> > from
> > > > > the last thread were going to make it into this release?
> > > > >
> > > > > Also I think GEODE-6539 should be fixed, it looks like an NPE that
> > > occurs
> > > > > when we process leave requests.
> > > > >
> > > > >
> > > > >
> > > > > On Tue, Feb 19, 2019 at 2:25 PM Sai Boorlagadda <
> > > > sai.boorlaga...@gmail.com>
> > > > > wrote:
> > > > >
> > > > >> My earlier release branch has created as 'release/1.9' without the
> > > patch
> > > > >> number in semver.
> > > > >> So I have re-created a new release branch 'release/1.9.0'.
> > > > >>
> > > > >> I will go ahead delete the unwanted branch 'release/1.9'
> > > > >>
> > > > >> Sai
> > > > >> On Tue, Feb 19, 2019 at 2:17 PM Sai Boorlagadda <
> > > > >> sai.boorlaga...@gmail.com>
> > > > >> wrote:
> > > > >>
> > > > >> > Hello Everyone,
> > > > >> >
> > > > >> > As discussed in my earlier email I have created a new release
> > branch
> > > > >> for Apache Geode 1.9.0 - "release/1.9"
> > > > >> >
> > > > >> > Please do review and raise any concern with the release branch.
> > > > >> > If no concerns are raised, we will start with the voting for the
> > > > >> release candidate soon.
> > > > >> >
> > > > >> > Regards
> > > > >> > Sai
> > > > >> >
> > > > >> >
> > > > >>
> > > > >
> > > >
> > >
> >
>


Re: Release branch for Apache Geode 1.9.0 has been cut

2019-02-27 Thread Jason Huynh
Ok, will do.  Also, I think Bruce is going to take a look at
https://issues.apache.org/jira/browse/GEODE-6359.  I'll let you know when I
merge it to 1.9.

On Wed, Feb 27, 2019 at 9:19 AM Sai Boorlagadda 
wrote:

> Jason, You can go ahead and cherry pick onto the release branch.
>
> Sai
>
> On Wed, Feb 27, 2019 at 8:54 AM Jason Huynh  wrote:
>
> > Hi Sai,
> >
> > Fix for GEODE-6404 is now on develop
> > (2be6375a775b6b0d00d0c41a1e2a3bf4b8745a46)
> >  Would you be able to pull it into the 1.9 branch or would you like me
> to?
> >
> > Thanks,
> > -Jason
> >
> > On Thu, Feb 21, 2019 at 3:37 PM Sai Boorlagadda <
> sai.boorlaga...@gmail.com
> > >
> > wrote:
> >
> > > GEODE-6359 is unassigned in JIRA. Who is working on it?
> > >
> > > On Wed, Feb 20, 2019 at 9:08 AM Jason Huynh  wrote:
> > >
> > > > Oh ok I thought I read that voting was going to start soon, so I
> > thought
> > > > I'd raise a concern about the tickets not being fixed yet.
> > > >
> > > > I meant this ticket https://issues.apache.org/jira/browse/GEODE-6359
> > > This
> > > > seems like a bad thing to have in the product.  It looks like a
> > possible
> > > > issue when processLeaveRequests.  I think the fix would be to just
> copy
> > > the
> > > > list or not log the list of members.
> > > >
> > > >
> > > >
> > > > On Tue, Feb 19, 2019 at 5:35 PM Sai Boorlagadda <
> > > sai.boorlaga...@gmail.com
> > > > >
> > > > wrote:
> > > >
> > > > > GEODE-6404 can be cherry-picked when it is ready.
> > > > > The release branch is cut to avoid any risk of regression that
> > > > > can be introduced by new work being merged to develop.
> > > > >
> > > > > Do you mean GEODE-6369?
> > > > >
> > > > > On Tue, Feb 19, 2019 at 4:50 PM Jason Huynh 
> > wrote:
> > > > >
> > > > > > Correction, GEODE-6359 and GEODE-6404.
> > > > > >
> > > > > > On Tue, Feb 19, 2019 at 4:49 PM Jason Huynh 
> > > wrote:
> > > > > >
> > > > > > > I still haven't gotten GEODE-6404 in... I assumed that the
> > tickets
> > > > from
> > > > > > > the last thread were going to make it into this release?
> > > > > > >
> > > > > > > Also I think GEODE-6539 should be fixed, it looks like an NPE
> > that
> > > > > occurs
> > > > > > > when we process leave requests.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Tue, Feb 19, 2019 at 2:25 PM Sai Boorlagadda <
> > > > > > sai.boorlaga...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > >> My earlier release branch has created as 'release/1.9' without
> > the
> > > > > patch
> > > > > > >> number in semver.
> > > > > > >> So I have re-created a new release branch 'release/1.9.0'.
> > > > > > >>
> > > > > > >> I will go ahead delete the unwanted branch 'release/1.9'
> > > > > > >>
> > > > > > >> Sai
> > > > > > >> On Tue, Feb 19, 2019 at 2:17 PM Sai Boorlagadda <
> > > > > > >> sai.boorlaga...@gmail.com>
> > > > > > >> wrote:
> > > > > > >>
> > > > > > >> > Hello Everyone,
> > > > > > >> >
> > > > > > >> > As discussed in my earlier email I have created a new
> release
> > > > branch
> > > > > > >> for Apache Geode 1.9.0 - "release/1.9"
> > > > > > >> >
> > > > > > >> > Please do review and raise any concern with the release
> > branch.
> > > > > > >> > If no concerns are raised, we will start with the voting for
> > the
> > > > > > >> release candidate soon.
> > > > > > >> >
> > > > > > >> > Regards
> > > > > > >> > Sai
> > > > > > >> >
> > > > > > >> >
> > > > > > >>
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: Release branch for Apache Geode 1.9.0 has been cut

2019-02-27 Thread Jason Huynh
Ok, GEODE-6404 is merged into the release/1.9.0 branch:

commit 40b3d9c690257d3e0e9bccfa1677c8adcbdce096 (HEAD -> release/1.9.0, origin/release/1.9.0)
Author: Jason Huynh
Date:   Mon Feb 25 18:31:21 2019 +

    GEODE-6404: work around possible sync issue with computeIfAbsent (#3196)

    * GEODE-6404: work around possible synchronization issue with computeIfAbsent
      * Tries to do the get() outside of a lock before computing
      * Added jmh test
      * Local benchmarking showed that although jdk 11 fixes the contention issue,
        performing the get was faster than the retrieve mechanism of computeIfAbsent

    (cherry picked from commit 2be6375a775b6b0d00d0c41a1e2a3bf4b8745a46)
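
For anyone curious, a minimal sketch of the pattern the commit message describes
(illustrative only, not the actual Geode code): try a plain get() first and only
fall back to computeIfAbsent on a miss, since older JDKs can contend inside
computeIfAbsent even when the mapping already exists.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

public class GetBeforeComputeIfAbsent {
  private final ConcurrentMap<String, Object> cache = new ConcurrentHashMap<>();

  // On a hit we return immediately and never enter computeIfAbsent, avoiding the
  // contention it can cause on older JDKs when the value is already present.
  Object getOrCreate(String key, Function<String, Object> factory) {
    Object value = cache.get(key);
    if (value != null) {
      return value;
    }
    return cache.computeIfAbsent(key, factory);
  }
}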


On Wed, Feb 27, 2019 at 10:15 AM Jason Huynh  wrote:

> Ok will do.  Also, I think Bruce is going to take a look at
> https://issues.apache.org/jira/browse/GEODE-6359.  I'll let you know when
> I merge to 1.9
>
> On Wed, Feb 27, 2019 at 9:19 AM Sai Boorlagadda 
> wrote:
>
>> Jason, You can go ahead and cherry pick onto the release branch.
>>
>> Sai
>>
>> On Wed, Feb 27, 2019 at 8:54 AM Jason Huynh  wrote:
>>
>> > Hi Sai,
>> >
>> > Fix for GEODE-6404 is now on develop
>> > (2be6375a775b6b0d00d0c41a1e2a3bf4b8745a46)
>> >  Would you be able to pull it into the 1.9 branch or would you like me
>> to?
>> >
>> > Thanks,
>> > -Jason
>> >
>> > On Thu, Feb 21, 2019 at 3:37 PM Sai Boorlagadda <
>> sai.boorlaga...@gmail.com
>> > >
>> > wrote:
>> >
>> > > GEODE-6359 is unassigned in JIRA. Who is working on it?
>> > >
>> > > On Wed, Feb 20, 2019 at 9:08 AM Jason Huynh 
>> wrote:
>> > >
>> > > > Oh ok I thought I read that voting was going to start soon, so I
>> > thought
>> > > > I'd raise a concern about the tickets not being fixed yet.
>> > > >
>> > > > I meant this ticket
>> https://issues.apache.org/jira/browse/GEODE-6359
>> > > This
>> > > > seems like a bad thing to have in the product.  It looks like a
>> > possible
>> > > > issue when processLeaveRequests.  I think the fix would be to just
>> copy
>> > > the
>> > > > list or not log the list of members.
>> > > >
>> > > >
>> > > >
>> > > > On Tue, Feb 19, 2019 at 5:35 PM Sai Boorlagadda <
>> > > sai.boorlaga...@gmail.com
>> > > > >
>> > > > wrote:
>> > > >
>> > > > > GEODE-6404 can be cherry-picked when it is ready.
>> > > > > The release branch is cut to avoid any risk of regression that
>> > > > > can be introduced by new work being merged to develop.
>> > > > >
>> > > > > Do you mean GEODE-6369?
>> > > > >
>> > > > > On Tue, Feb 19, 2019 at 4:50 PM Jason Huynh 
>> > wrote:
>> > > > >
>> > > > > > Correction, GEODE-6359 and GEODE-6404.
>> > > > > >
>> > > > > > On Tue, Feb 19, 2019 at 4:49 PM Jason Huynh 
>> > > wrote:
>> > > > > >
>> > > > > > > I still haven't gotten GEODE-6404 in... I assumed that the
>> > tickets
>> > > > from
>> > > > > > > the last thread were going to make it into this release?
>> > > > > > >
>> > > > > > > Also I think GEODE-6539 should be fixed, it looks like an NPE
>> > that
>> > > > > occurs
>> > > > > > > when we process leave requests.
>> > > > > > >
>> > > > > > >
>> > > > > > >
>> > > > > > > On Tue, Feb 19, 2019 at 2:25 PM Sai Boorlagadda <
>> > > > > > sai.boorlaga...@gmail.com>
>> > > > > > > wrote:
>> > > > > > >
>> > > > > > >> My earlier release branch has created as 'release/1.9'
>> without
>> > the
>> > > > > patch
>> > > > > > >> number in semver.
>> > > > > > >> So I have re-created a new release branch 'release/1.9.0'.
>> > > > > > >>
>> > > > > > >> I will go ahead delete the unwanted branch 'release/1.9'
>> > > > > > >>
>> > > > > > >> Sai
>> > > > > > >> On Tue, Feb 19, 2019 at 2:17 PM Sai Boorlagadda <
>> > > > > > >> sai.boorlaga...@gmail.com>
>> > > > > > >> wrote:
>> > > > > > >>
>> > > > > > >> > Hello Everyone,
>> > > > > > >> >
>> > > > > > >> > As discussed in my earlier email I have created a new
>> release
>> > > > branch
>> > > > > > >> for Apache Geode 1.9.0 - "release/1.9"
>> > > > > > >> >
>> > > > > > >> > Please do review and raise any concern with the release
>> > branch.
>> > > > > > >> > If no concerns are raised, we will start with the voting
>> for
>> > the
>> > > > > > >> release candidate soon.
>> > > > > > >> >
>> > > > > > >> > Regards
>> > > > > > >> > Sai
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >>
>> > > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>>
>


Re: 1.9 release date

2019-03-01 Thread Jason Huynh
To Alexander's point, I'm using the latest geode snapshot and am seeing an
issue that looks similar to (if not the same as) GEODE-3780 (but this one
is closed).
I'd like to explore this a bit more and decide if that should be reopened,
but I am not sure if it's an issue important enough to wait for.

I think some soak time would be nice, but I can understand that it's not a
clear criterion.

On Fri, Mar 1, 2019 at 3:57 PM Sai Boorlagadda 
wrote:

> I started working on LICENSE issues.
>
> On Fri, Mar 1, 2019 at 3:55 PM Anthony Baker  wrote:
>
> > I’ll point out that the license issue I mentioned earlier this week isn’t
> > resolved.  And that we’re bundling potentially incompatible Jackson jars.
> >
> > Anthony
> >
> >
> > > On Mar 1, 2019, at 3:41 PM, Alexander Murmann 
> > wrote:
> > >
> > > Clear quality metrics is definitely great. However, we've also seen in
> > the
> > > past that we sometimes find new issues by continue work on the code and
> > > some folks starting to use them on their own projects. For that
> reason, I
> > > think it might be wise to give ourselves some extra time to run into
> > issues
> > > organically. Maybe we don't need that as our coverage improves.
> > >
> > > On Fri, Mar 1, 2019 at 3:24 PM Owen Nichols 
> wrote:
> > >
> > >> The release criteria of “based on meeting quality goals” sounds great.
> > >>
> > >> What are those quality goals exactly, and can we objectively measure
> > >> progress against them?
> > >>
> > >> It looks like we already have a number of well-defined quality goals
> in
> > >> https://cwiki.apache.org/confluence/display/GEODE/Release+process <
> > >> https://cwiki.apache.org/confluence/display/GEODE/Release+process>
> > >> Presuming this is up-to-date, we need to satisfy 8 required quality
> > goals
> > >> before we can release.
> > >>
> > >> Thus far, we have not met the goal "Build is successful including
> > >> automated tests”.
> > >> To meet it, is one “all green" run in the release pipeline <
> > >>
> >
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-release-1-9-0-main?groups=complete
> > >
> > >> sufficient?  Or should we require 2 or 3 “all green” runs on the same
> > SHA?
> > >>
> > >> Do Windows tests count toward “all green”?  Currently they are not in
> > the
> > >> default view (same as 1.8.0).
> > >>
> > >> The Geode release process document above also lists an additional 11
> > >> quality goals as “optional.”  I assume these are meant as suggestions
> > the
> > >> community may wish to consider when voting on a release?
> > >>
> > >> If anyone feels the existing release process documentation does not
> > >> adequately define what quality goals must be met in order to release,
> > let’s
> > >> discuss (and get those docs updated!)
> > >>
> > >> -Owen
> > >>
> > >>> On Mar 1, 2019, at 8:02 AM, Anthony Baker  wrote:
> > >>>
> > >>> IMHO we start release work based on a quarterly schedule and we
> finish
> > >> it based on meeting quality goals.  So right now I’m less worried
> about
> > >> when the release will be done (because uncertainty) and more focused
> on
> > >> ensuring we have demonstrated stability on the release branch.
> > Hopefully
> > >> that will happen sooner than 4/1…but it could take longer too.
> > >>>
> > >>> Anthony
> > >>>
> > >>>
> >  On Feb 28, 2019, at 6:00 PM, Alexander Murmann  >
> > >> wrote:
> > 
> >  Hi everyone,
> > 
> >  According to our wiki we were aiming for a March 1st release date
> for
> > >> our
> >  1.9 release. We cut the release branch about two weeks late and see
> > >> unusual
> >  amounts of merges still going into the branch. I propose that we
> give
> >  ourselves some more time to validate what's there. My proposal is to
> > aim
> >  for last week of March or maybe even week of April 1st.
> > 
> >  What do you all think?
> > >>>
> > >>
> > >>
> > >
> > > --
> > > Alexander J. Murmann
> > > (650) 283-1933
> >
> >
>


Re: defunct branches

2019-04-17 Thread Jason Huynh
Hi Bruce,

I am unable to see the same branches on the geode repo.  I do see these
branches on my personal fork, but that's because I haven't updated my own
personal fork in some time...

Is there a chance that your origin is pointing to your personal fork and
not the Apache Geode repo?

I am also unable to see these branches through the UI:
https://github.com/apache/geode/branches/all



On Wed, Apr 17, 2019 at 4:17 PM Bruce Schuchardt 
wrote:

> We have nearly 400 branches in the repo right now.  Most of them are for
> efforts that have been merged to develop long ago.  Don't forget to
> delete your branches when you're done with them.
>
>
>


Re: GEODE-6630 fix for release 1.9.0

2019-04-18 Thread Jason Huynh
+1

On Thu, Apr 18, 2019 at 1:48 PM Dan Smith  wrote:

> +1 - this looks like a good fix to get in if it was introduced in 1.9.0.
>
> -Dan
>
> On Thu, Apr 18, 2019 at 1:28 PM Eric Shu  wrote:
>
> > I'd like to include the fix for the NPE.
> > It is new in 1.9.
> >
> > Regards,
> > Eric
> >
>


Re: [PROPOSAL]: Improve OQL Method Invocation Security

2019-06-24 Thread Jason Huynh
+1

I have some concerns about all of the different ways we configure geode to
be secure, but that's a different issue ;-)
Overall, very thorough proposal Juan!



On Mon, Jun 24, 2019 at 4:22 PM Dan Smith  wrote:

> +1
>
> This proposal looks good to me!
>
> On Mon, Jun 24, 2019 at 4:15 PM Udo Kohlmeyer  wrote:
>
> > +1, Count me in
> >
> > On 6/24/19 13:06, Juan José Ramos wrote:
> > > Hey Jake,
> > >
> > > Sure, I guess we could do a live session if there's enough interest
> after
> > > people have reviewed the proposal.
> > > Best regards.
> > >
> > > On Mon, Jun 24, 2019 at 4:17 PM Jacob Barrett 
> > wrote:
> > >
> > >>
> > >>> On Jun 24, 2019, at 11:49 AM, Juan José Ramos 
> > wrote:
> > >>>
> > >>>   I’d rather get feedback in any way and aggregate everything on my
> own
> > >> than
> > >>> maybe not getting anything because I'm explicitly limiting the
> options
> > to
> > >>> provide it.
> > >> Dealers choice so both it is!
> > >>
> > >> Could you also consider public live session on some medium, like Zoom,
> > >> where you can walk through the proposal and take like feedback and
> > >> questions?
> > >>
> > >> Thanks,
> > >> Jake
> > >>
> > >>
> > >>
> >
>


Re: [PROPOSAL]: Improve OQL Method Invocation Security

2019-07-02 Thread Jason Huynh
Are security manager policies modifiable on the fly?  Just wondering: if
someone decides they want to disallow or allow something, will they need to
restart their VMs/geode nodes?

I think Dan pointed this out earlier in the thread, but I just wanted to have
us consider the original CVE that led to the heavy-handed denial of all method
invocations:

  CVE-2017-9795 Apache Geode OQL method invocation vulnerability

  Description:
  A malicious user with read access to specific regions within a Geode
  cluster may execute OQL queries that allow read and write access to
  objects within unauthorized regions.  In addition a user could invoke
  methods that allow remote code execution

I think Juan's proposal would still allow us to provide multiple solutions
that may or may not reopen that hole, but it would be up to the user to
decide what they are willing to accept.  The choice of what the default
should be would still be up for debate...
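
For anyone who hasn't looked at that CVE before, a hedged illustration of what
"method invocation in OQL" means (the region name and query are made up; this is
the style of call the deny-all default blocks and the proposed authorizers would
make configurable):

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.query.Query;

public class OqlMethodInvocationExample {
  // The query invokes arbitrary methods on the objects it visits; reflective calls
  // like this are the kind of thing the deny-all default was introduced to stop.
  static Object run(Cache cache) throws Exception {
    Query query = cache.getQueryService()
        .newQuery("SELECT e.getClass().getClassLoader() FROM /exampleRegion e");
    return query.execute();
  }
}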



On Tue, Jul 2, 2019 at 1:07 PM Jacob Barrett  wrote:

>
>
> > On Jul 2, 2019, at 11:58 AM, Juan José Ramos  wrote:
> >
> > Hello Jake,
> >
> > I've been doing some reading about the *Java Security Manager* and, even
> > when it might work for our use case, I don't think is a good fit due to
> the
> > following reasons:
> > 1). We already have chosen *Shiro* for authentication and authorization,
> > adding yet another security framework (and mapping between roles and
> > permissions between the two of them) for OQL method invocation security
> > seems overkilling to me. I'd rather spend some time improving the
> > *ResourcePermissionBasedMethodAuthorizer
>
> The security manager doesn’t have to be as fined grained as individual
> users. Do we really intend to support actions on objects base on current
> user too?
>
> > [1] *and/or adding *Permissions*/*Roles* to our current security
> framework
> > than introducing a new security framework altogether.
>
> Again, we don’t have to add anything. If the user wants to restrict access
> to certain activities they could add a policy file to the JVM to do so. It
> doesn’t have to be user based. We could simply have two hard coded roles,
> UserCode and SystemCode. SystemCode has AllPermissions, UserCode has none.
> Now we run all queries under the context of UserCode and it can’t execute
> any File, Runtime, Socket, etc.
>
> > 2). There can only be one *Security Manager* per JVM at a given time
> Eh, sorta but no, you can use thread contexts and class loader isolation,
> but there really doesn’t need to be more than one.
> .
> > 3). Customers already using a custom *Security Manager* on their own will
> > be in trouble... we can certainly wrote our own implementation as a
> > composite and parse multiple policy files, but this will probably break
> the
> > backward compatibility for these customers (and requires yet more
> > configuration and things to keep in mind when upgrading).
> Their custom SecurityManager would conform to the same rules as the
> default file based one in the JRE. They would be able to control code
> access using their existing framework. Seems like win to me.
>
> > 4). The current set of default *Permissions* (*PropertyPermission*,
> > *FilePermission*, etc.) don't apply to our use case, so we'll need to
> > create our own implementations and do some plumbing to map to what we
> > currently have in terms of principals and roles (more boilerplate).
>
> How do they not apply?
>
> > 5). In order to check a permission we basically need to check whether a
> > Security Manager is installed (*System.getSecurityManager() != null*) and
> > execute the check afterwards (*securityManager**.checkPermission(perm)*);
> > the *Security Manager* then looks at the classes of all methods currently
> > within the call stack (roughly speaking) and returns or throws an
> > exception... a lookup through a simple Map is probably faster, and easier
> > to maintain and read.
>
> All the things we need to be protected from have already been coded with
> these checks in the JRE. Do you think we need to protect other activities?
>
> -Jake
>
>


Re: SSL Alias Support for JMX Connections

2019-08-09 Thread Jason Huynh
+1

On Thu, Aug 8, 2019 at 7:12 PM Owen Nichols  wrote:

> Hi Juan and Sai, thank you for bringing your concern.
>
> Geode's release process dictates a time-based schedule <
> https://cwiki.apache.org/confluence/display/GEODE/Release+Schedule> to
> cut release branches.  The release/1.10.0 <
> https://github.com/apache/geode/tree/release/1.10.0> branch was already
> cut 1 week ago, but the “critical fixes” rule does allow critical fixes to
> be brought to the release branch by proposal on the dev list, as you have
> done here.
>
> If there is consensus from the Geode community that your proposed change
> satisfies the “critical fixes” rule, I will be happy to bring it to the
> 1.10.0 release branch.
>
> Regards
> - Owen
>
> > On Aug 8, 2019, at 6:53 PM, Sai Boorlagadda 
> wrote:
> >
> > +1 for getting this into 1.10
> >
> > On Thu, Aug 8, 2019 at 11:29 AM Juan José Ramos 
> wrote:
> >
> >> I'd like to propose including the fix for *GEODE-7022 [1]* in release
> >> 1.10.0.
> >> The fix basically improves our own implementation of the
> >> *RMIClientSocketFactory* to fully support the GEODE SSL settings,
> allowing
> >> our users to specify a default alias when opening an RMI connection.
> >> Best regards.
> >>
> >> [1]: https://issues.apache.org/jira/browse/GEODE-7022
> >>
> >> --
> >> Juan José Ramos Cassella
> >> Senior Software Engineer
> >> Email: jra...@pivotal.io
> >>
>
>


Re: [DISCUSS] Tweak to branch protection rules

2019-10-30 Thread Jason Huynh
+1, thanks Dan!

On Wed, Oct 30, 2019 at 10:07 AM Aaron Lindsey  wrote:

> +1
>
> - Aaron
>
>
> On Wed, Oct 30, 2019 at 8:02 AM Ju@N  wrote:
>
> > Perfect Naba, thanks for answering this.
> > My vote is +1 then.
> >
> > On Wed, Oct 30, 2019 at 2:37 PM Nabarun Nag  wrote:
> >
> > > The check box Dan is mentioning will just not invalidate any approved
> > > review if the code is changed.
> > > If a change is requested, the button will remain disabled.
> > >
> > > Regards
> > > Naba
> > >
> > >
> > > On Wed, Oct 30, 2019 at 6:27 AM Joris Melchior 
> > > wrote:
> > >
> > > > +1
> > > >
> > > > On Wed, Oct 30, 2019 at 5:27 AM Ju@N  wrote:
> > > >
> > > > > Question: this only applies for *approvals*, not for *refusals*,
> > > right?;
> > > > I
> > > > > mean, the *merge pull request* button will remain blocked if there
> > were
> > > > > some changes requested by reviewers and the author of the PR adds
> new
> > > > > commits (either addressing those requested changes or not)?.
> > > > > If the answer to the above is "yes", then +1.
> > > > >
> > > > > On Wed, Oct 30, 2019 at 1:44 AM Nabarun Nag 
> wrote:
> > > > >
> > > > > > +1
> > > > > >
> > > > > > On Tue, Oct 29, 2019 at 6:21 PM Darrel Schneider <
> > > > dschnei...@pivotal.io>
> > > > > > wrote:
> > > > > >
> > > > > > > +1
> > > > > > >
> > > > > > > On Tue, Oct 29, 2019 at 6:08 PM Owen Nichols <
> > onich...@pivotal.io>
> > > > > > wrote:
> > > > > > >
> > > > > > > > +1 …this has already bitten me a few times
> > > > > > > >
> > > > > > > > > On Oct 29, 2019, at 6:01 PM, Dan Smith 
> > > > wrote:
> > > > > > > > >
> > > > > > > > > Hi all,
> > > > > > > > >
> > > > > > > > > It seems we've configured our branch protection rules such
> > that
> > > > > > > pushing a
> > > > > > > > > change to a PR that has been approved invalidates the
> > previous
> > > > > > > approval.
> > > > > > > > >
> > > > > > > > > I think we should turn this off - it looks like it's an
> > > optional
> > > > > > > feature.
> > > > > > > > > We should trust people to rerequest reviews if needed.
> Right
> > > now
> > > > > this
> > > > > > > is
> > > > > > > > > adding busywork for people to reapprove minor changes
> (Fixing
> > > > merge
> > > > > > > > > conflicts, spotless, etc.)
> > > > > > > > >
> > > > > > > > > If you all agree I'll ask infra to uncheck "Dismiss stale
> > pull
> > > > > > request
> > > > > > > > > approvals when new commits are pushed." in our branch
> > > protection
> > > > > > rules.
> > > > > > > > >
> > > > > > > > > -Dan
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Ju@N
> > > > >
> > > >
> > > >
> > > > --
> > > > *Joris Melchior *
> > > > CF Engineering
> > > > Pivotal Toronto
> > > > 416 877 5427
> > > >
> > > > “Programs must be written for people to read, and only incidentally
> for
> > > > machines to execute.” – *Hal Abelson*
> > > > 
> > > >
> > >
> >
> >
> > --
> > Ju@N
> >
>


Re: [DISCUSS] is overriding a PR check ever justified?

2019-10-30 Thread Jason Huynh
If we are going to allow overrides, then maybe what Owen is describing
should occur.  Make a request on the dev list and explain the reasoning.

I don't think this has been done, and a few have already been overridden.

Also, who has the capability to override, and who knows how?  How is that
determined?

On Wed, Oct 30, 2019 at 1:59 PM Owen Nichols  wrote:

> > How do you override a check, anyway?
>
> Much like asking for jira permissions, wiki permissions, etc, just ask on
> the dev list ;)
>
> Presumably this type of request would be made as a “last resort” following
> a dev list discussion wherein all other reasonable options had been
> exhausted (reworking or splitting up the PR, pushing empty commits,
> rebasing the PR, etc)
>
> > On Oct 30, 2019, at 1:42 PM, Dan Smith  wrote:
> >
> > +1 for allowing overrides. I think we should avoid backing ourselves
> into a
> > corner where we can't get anything into develop without talking to apache
> > infra. Some infrastructure things we can't even fix without pushing a
> > change develop!
> >
> > How do you override a check, anyway?
> >
> > -Dan
> >
> > On Wed, Oct 30, 2019 at 12:58 PM Donal Evans  wrote:
> >
> >> -1 to overriding from me.
> >>
> >> The question I have here is what's the rush? Is anything ever so
> >> time-sensitive that you can't even wait the 15 minutes it takes for it
> to
> >> build and run unit tests? If some infrastructure problem is preventing
> >> builds or tests from completing then that should be fixed before any new
> >> changes are added, otherwise what's the point in even having the pre
> >> check-in process?
> >>
> >> -Donal
> >>
> >> On Wed, Oct 30, 2019 at 11:44 AM Nabarun Nag  wrote:
> >>
> >>> @Aaron
> >>> It's okay to wait for at least the build, and unit tests to complete,
> to
> >>> cover all the bases. [There may have been commits in between which may
> >>> result in failure because of the revert]  And it's not hard to get a PR
> >>> approval.
> >>>
> >>> -1 on overriding. If the infrastructure is down, which is the test
> >>> framework designed to ensure that we are not checking in unwanted
> changes
> >>> into Apache Geode, wait for the infrastructure to be up, get your
> changes
> >>> verified, get the review from a fellow committer and then check-in your
> >>> changes.
> >>>
> >>> I still don't understand why will anyone not wait for unit tests and
> >> build
> >>> to be successful.
> >>>
> >>> Regards
> >>> Nabarun Nag
> >>>
> >>> On Wed, Oct 30, 2019 at 11:32 AM Aaron Lindsey 
> >>> wrote:
> >>>
>  One case when it might be acceptable to overrule a PR check is
> >> reverting
> >>> a
>  commit. Before the branch protection was enabled, a committer could
> >>> revert
>  a commit without a PR. Now that PRs are mandatory, we have to wait for
> >>> the
>  checks to run in order to revert a commit. Usually we are reverting a
>  commit because it's causing problems, so I think overruling the PR
> >> checks
>  may be acceptable in that case.
> 
>  - Aaron
> 
> 
>  On Wed, Oct 30, 2019 at 11:11 AM Owen Nichols 
> >>> wrote:
> 
> > Our new branch-protection rules can sometimes lead to unexpected
>  obstacles
> > when infrastructure issues impede the intended process.  Should we
>  discuss
> > such cases as they come up, and should overruling the result of a PR
>  check
> > ever be an option on the table?
> >
> > -Owen
> 
> >>>
> >>
>
>


[vote/discuss]Override stressNewTest for Pull Request #4250?

2019-10-31 Thread Jason Huynh
Greetings,

We have a pull request (https://github.com/apache/geode/pull/4250) that is
running into a problem with stressNewTest.  The tests being run are mostly
RollingUpgrade tests, which take quite a bit of time to run as a full
suite.  Because these tests were added/modified, stressNewTest doesn't have
enough time to complete the run, since it repeats them N (50) times.

However, 7400 tests have already completed and none of them have failed:
http://files.apachegeode-ci.info/builds/apache-develop-pr/geode-pr-4250/test-results/repeatTest/1572546653/

We would like to get this fix in before branching the next release, but are
unable to due to stressNewTest gating the merge button.  I know we have
another thread about overrides etc, and maybe this is a data point, but
this isn't meant to discuss that.

Would everyone be able to agree to allow someone to manually override and
merge this commit in (title of PR and reviews pending)?


Re: IncrementalBackupDistributedTest.testMissingMemberInBaseline hangs

2019-11-06 Thread Jason Huynh
I'm working on a fix and have a PR up for another hang in the same test
that I think fixes this issue.

https://github.com/apache/geode/pull/4255

On Wed, Nov 6, 2019 at 10:47 AM Kirk Lund  wrote:

> IncrementalBackupDistributedTest.testMissingMemberInBaseline is hanging
> intermittently in the DistributedTest job of CI and precheckin.
>
> I filed GEODE-7411 with all the involved thread stacks that I could find:
> https://issues.apache.org/jira/browse/GEODE-7411
>
> If anyone knows of any recent changes to backups, diskstore locking, or the
> locking of diskstores during cache close, please let Mark or I know.
>
> Thanks,
> Kirk
>


Re: Lucene upgrade

2019-11-06 Thread Jason Huynh
Hi Mario,

I think there are a few ways to accomplish what Dan was suggesting... Dan or
others, please chime in with more options/solutions.

1.) We add some product code/lucene listener to detect whether we have old
versions of geode and if so, do not write to lucene on the newly updated
node until all versions are up to date.

2.)  We document it and provide instructions (and a way) to pause lucene
indexing before someone attempts to do a rolling upgrade.

I'd prefer option 1 or some other robust solution, because I think option 2
has many possible issues.
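
A rough sketch of what option 1 might look like from the listener side (the
helper names are hypothetical, not actual Geode internals; a real check would
consult the cluster's membership/version information):

import java.util.List;
import java.util.function.BooleanSupplier;

public class VersionGatedLuceneWriterSketch {
  // Hypothetical check: true while any member in the cluster still runs a Geode
  // version that predates the new Lucene codec.
  private final BooleanSupplier clusterHasOlderMembers;

  VersionGatedLuceneWriterSketch(BooleanSupplier clusterHasOlderMembers) {
    this.clusterHasOlderMembers = clusterHasOlderMembers;
  }

  // Intended to sit where the async listener flushes region events into the index.
  // Returning false leaves the batch queued, so nothing is written with the new
  // codec until the rolling upgrade finishes.
  boolean processBatch(List<?> batch) {
    if (clusterHasOlderMembers.getAsBoolean()) {
      return false; // keep the events queued; retry after the upgrade completes
    }
    writeBatchToLuceneIndex(batch);
    return true;
  }

  private void writeBatchToLuceneIndex(List<?> batch) {
    // hypothetical: hand the batch to the existing index repository
  }
}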


-Jason


On Wed, Nov 6, 2019 at 1:03 AM Mario Kevo  wrote:

> Hi Dan,
>
> thanks for suggestions.
> I didn't found a way to write lucene in older format. They only support
> reading old format indexes with newer version by using lucene-backward-
> codec.
>
> Regarding to freeze writes to the lucene index, that means that we need
> to start locators and servers, create lucene index on the server, roll
> it to current and then do puts. In this case tests passed. Is it ok?
>
>
> BR,
> Mario
>
>
> On Mon, 2019-11-04 at 17:07 -0800, Dan Smith wrote:
> > I think the issue probably has to do with doing a rolling upgrade
> > from an
> > old version of geode (with an old version of lucene) to the new
> > version of
> > geode.
> >
> > Geode's lucene integration works by writing the lucene index to a
> > colocated
> > region. So lucene index data that was generated on one server can be
> > replicated or rebalanced to other servers.
> >
> > I think what may be happening is that data written by a geode member
> > with a
> > newer version is being read by a geode member with an old version.
> > Because
> > this is a rolling upgrade test, members with multiple versions will
> > be
> > running as part of the same cluster.
> >
> > I think to really fix this rolling upgrade issue we would need to
> > somehow
> > configure the new version of lucene to write data in the old format,
> > at
> > least until the rolling upgrade is complete. I'm not sure if that is
> > possible with lucene or not - but perhaps? Another option might be to
> > freeze writes to the lucene index during the rolling upgrade process.
> > Lucene indexes are asynchronous, so this wouldn't necessarily require
> > blocking all puts. But it would require queueing up a lot of updates.
> >
> > -Dan
> >
> > On Mon, Nov 4, 2019 at 12:05 AM Mario Kevo 
> > wrote:
> >
> > > Hi geode dev,
> > >
> > > I'm working on upgrade lucene to a newer version. (
> > > https://issues.apache.org/jira/browse/GEODE-7309)
> > >
> > > I followed instruction from
> > >
> https://cwiki.apache.org/confluence/display/GEODE/Upgrading+to+Lucene+7.1.0
> > > Also add some other changes that is needed for lucene 8.2.0.
> > >
> > > I found some problems with tests:
> > >  * geode-
> > >lucene/src/test/java/org/apache/geode/cache/lucene/internal/dist
> > > ribu
> > >ted/DistributedScoringJUnitTest.java:
> > >
> > >
> > >  *
> > > geode-
> > > lucene/src/upgradeTest/java/org/apache/geode/cache/lucene/RollingUp
> > > gradeQueryReturnsCorrectResultsAfterClientAndServersAreRolledOver.j
> > > ava:
> > >  *
> > > geode-
> > > lucene/src/upgradeTest/java/org/apache/geode/cache/lucene/RollingUp
> > > gradeQueryReturnsCorrectResultAfterTwoLocatorsWithTwoServersAreRoll
> > > ed.java:
> > >  *
> > > ./geode-
> > > lucene/src/upgradeTest/java/org/apache/geode/cache/lucene/RollingUp
> > > gradeQueryReturnsCorrectResultsAfterServersRollOverOnPartitionRegio
> > > n.java:
> > >  *
> > > ./geode-
> > > lucene/src/upgradeTest/java/org/apache/geode/cache/lucene/RollingUp
> > > gradeQueryReturnsCorrectResultsAfterServersRollOverOnPersistentPart
> > > itionRegion.java:
> > >
> > >   -> failed due to
> > > Caused by: org.apache.lucene.index.IndexFormatTooOldException:
> > > Format
> > > version is not supported (resource
> > > BufferedChecksumIndexInput(segments_1)): 6 (needs to be between 7
> > > and
> > > 9). This version of Lucene only supports indexes created with
> > > release
> > > 6.0 and later.
> > > at
> > > org.apache.lucene.codecs.CodecUtil.checkHeaderNoMagic(CodecUtil.jav
> > > a:21
> > > 3)
> > > at
> > > org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:3
> > > 05)
> > > at
> > > org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:2
> > > 89)
> > > at
> > > org.apache.lucene.index.IndexWriter.(IndexWriter.java:846)
> > > at
> > > org.apache.geode.cache.lucene.internal.IndexRepositoryFactory.finis
> > > hCom
> > > putingRepository(IndexRepositoryFactory.java:123)
> > > at
> > > org.apache.geode.cache.lucene.internal.IndexRepositoryFactory.compu
> > > teIn
> > > dexRepository(IndexRepositoryFactory.java:66)
> > > at
> > > org.apache.geode.cache.lucene.internal.PartitionedRepositoryManager
> > > .com
> > > puteRepository(PartitionedRepositoryManager.java:151)
> > > at
> > > org.apache.geode.cache.lucene.internal.PartitionedRepositoryManager
> >

Re: Lucene upgrade

2019-11-06 Thread Jason Huynh
Jake - from my understanding, the implementation detail of geode-lucene is
that we are using a partitioned region as a "file system" for lucene
files.  As new servers are rolled, the issue is that the new servers have
the new codec.  As puts occur on the user's data region, the async listeners
are processing on new/old servers alike.  If a new server writes using the
new codec, it's written into the partitioned region, but if an old server
with the old codec needs to read that file, it will blow up because it
doesn't know about the new codec.
Option 1 is to not have the new servers process/write if they detect
members on older geode versions (pre-codec changes).
Option 2 is similar but requires users to pause the aeq/lucene listeners.

Deleting the indexes and recreating them can be quite expensive, mostly
due to tombstone creation when creating a new lucene index, but it could be
considered Option 3.  It would also probably require
https://issues.apache.org/jira/browse/GEODE-3924 to be completed.

Gester - I may be wrong but I think option 1 is still doable.  We just need
to not write using the new codec until after all servers are upgraded.

There was also some upgrade challenge with scoring from what I remember,
but that's a different topic...


On Wed, Nov 6, 2019 at 1:00 PM Xiaojian Zhou  wrote:

> He tried to upgrade lucene version from current 6.6.4 to 8.2. There're some
> challenges. One challenge is the codec changed, which caused the format of
> index is also changed.
>
> That's why we did not implement it.
>
> If he resolved the coding challenges, then rolling upgrade will probably
> need option-2 to workaround it.
>
> Regards
> Gester
>
>
> On Wed, Nov 6, 2019 at 11:47 AM Jacob Barrett  wrote:
>
> > What about “versioning” the region that backs the indexes? Old servers
> > with old license would continue to read/write to old region. New servers
> > would start re-indexing with the new version. Given the async nature of
> the
> > indexing would the mismatch in indexing for some period of time have an
> > impact?
> >
> > Not an ideal solution but it’s something.
> >
> > In my previous life we just deleted the indexes and rebuilt them on
> > upgrade but that was specific to our application.
> >
> > -Jake
> >
> >
> > > On Nov 6, 2019, at 11:18 AM, Jason Huynh  wrote:
> > >
> > > Hi Mario,
> > >
> > > I think there are a few ways to accomplish what Dan was
> suggesting...Dan
> > or
> > > other's, please chime in with more options/solutions.
> > >
> > > 1.) We add some product code/lucene listener to detect whether we have
> > old
> > > versions of geode and if so, do not write to lucene on the newly
> updated
> > > node until all versions are up to date.
> > >
> > > 2.)  We document it and provide instructions (and a way) to pause
> lucene
> > > indexing before someone attempts to do a rolling upgrade.
> > >
> > > I'd prefer option 1 or some other robust solution, because I think
> > option 2
> > > has many possible issues.
> > >
> > >
> > > -Jason
> > >
> > >
> > >> On Wed, Nov 6, 2019 at 1:03 AM Mario Kevo 
> wrote:
> > >>
> > >> Hi Dan,
> > >>
> > >> thanks for suggestions.
> > >> I didn't found a way to write lucene in older format. They only
> support
> > >> reading old format indexes with newer version by using
> lucene-backward-
> > >> codec.
> > >>
> > >> Regarding to freeze writes to the lucene index, that means that we
> need
> > >> to start locators and servers, create lucene index on the server, roll
> > >> it to current and then do puts. In this case tests passed. Is it ok?
> > >>
> > >>
> > >> BR,
> > >> Mario
> > >>
> > >>
> > >>> On Mon, 2019-11-04 at 17:07 -0800, Dan Smith wrote:
> > >>> I think the issue probably has to do with doing a rolling upgrade
> > >>> from an
> > >>> old version of geode (with an old version of lucene) to the new
> > >>> version of
> > >>> geode.
> > >>>
> > >>> Geode's lucene integration works by writing the lucene index to a
> > >>> colocated
> > >>> region. So lucene index data that was generated on one server can be
> > >>> replicated or rebalanced to other servers.
> > >>>
> > >>> I think what may be happening is that data written by 

Re: Lucene upgrade

2019-11-06 Thread Jason Huynh
Dan - LGTM check it in! ;-) (kidding of course)

Jake - there is a side effect to this in that the user would have to
reimport all their data into the user defined region too.  Client apps
would also have to know which of the regions to put into.. also, I may be
misunderstanding this suggestion, completely.  In either case, I'll support
whoever implements the changes :-P


On Wed, Nov 6, 2019 at 2:53 PM Jacob Barrett  wrote:

>
>
> > On Nov 6, 2019, at 2:16 PM, Jason Huynh  wrote:
> >
> > Jake, -from my understanding, the implementation details of geode-lucene
> is
> > that we are using a partitioned region as a "file-system" for lucene
> > files.
>
> Yeah, I didn’t explain well. I mean to say literally create a new region
> for the new version of lucene and effectively start over. Yes this is
> expensive but its also functional. So new members would create region
> `lucene-whatever-v8` and start over there. Then when all nodes are upgraded
> the old `lucent-whatever` region could be deleted.
>
> Just tossing out alternatives to what’s already been posed.
>
> -Jake
>
>


Re: Lucene upgrade

2019-11-07 Thread Jason Huynh
Gester, I don't think we need to write in the old format; we just need the
new format not to be written while old members can potentially read the
lucene files.  Option 1 can be very similar to Dan's snippet of code.

I think Option 2 is going to leave a lot of people unhappy when they get
stuck with what Mario is experiencing right now and all we can say is "you
should have read the doc". Not to say Option 2 isn't valid; it's
definitely the least amount of work to do, but I still vote for option 1.

On Wed, Nov 6, 2019 at 5:16 PM Xiaojian Zhou  wrote:

> Usually re-creating region and index are expensive and customers are
> reluctant to do it, according to my memory.
>
> We do have an offline reindex scripts or steps (written by Barry?). If that
> could be an option, they can try that offline tool.
>
> I saw from Mario's email, he said: "I didn't found a way to write lucene in
> older format. They only support
> reading old format indexes with newer version by using lucene-backward-
> codec."
>
> That's why I think option-1 is not feasible.
>
> Option-2 will cause the queue to be filled. But usually customer will hold
> on, silence or reduce their business throughput when
> doing rolling upgrade. I wonder if it's a reasonable assumption.
>
> Overall, after compared all the 3 options, I still think option-2 is the
> best bet.
>
> Regards
> Gester
>
>
> On Wed, Nov 6, 2019 at 3:38 PM Jacob Barrett  wrote:
>
> >
> >
> > > On Nov 6, 2019, at 3:36 PM, Jason Huynh  wrote:
> > >
> > > Jake - there is a side effect to this in that the user would have to
> > > reimport all their data into the user defined region too.  Client apps
> > > would also have to know which of the regions to put into.. also, I may
> be
> > > misunderstanding this suggestion, completely.  In either case, I'll
> > support
> > > whoever implements the changes :-P
> >
> > Ah… there isn’t a way to re-index the existing data. Eh… just a thought.
> >
> > -Jake
> >
> >
>


Re: [DISCUSS] - Upgrading from Spring 4 to Spring 5

2019-11-07 Thread Jason Huynh
+1

On Thu, Nov 7, 2019 at 1:28 PM Dan Smith  wrote:

> +1
>
> On Thu, Nov 7, 2019 at 12:49 PM Jens Deppe  wrote:
>
> > +1
> >
> > On Wed, Oct 30, 2019 at 1:39 PM Udo Kohlmeyer  wrote:
> >
> > > Sorry,
> > >
> > > To clarify... When we change the Spring version we would be looking at
> > > looking to use the latest version and it's associated BOM.
> > >
> > > That might be inclusive of other Spring project upgrades.
> > >
> > > --Udo
> > >
> > > On 10/30/19 1:35 PM, Nabarun Nag wrote:
> > > > Hi Udo,
> > > > Maven has the latest as 5.2.0.RELEASE as the latest version.  In the
> > > > Dependency.groovy file, we have been putting the full version number.
> > > Hence
> > > > I am guessing you are suggesting we put 5.2.0.RELEASE?
> > > >
> > > > What about the status of the following dependencies?
> > > >
> > > > 'org.springframework.hateoas', name: 'spring-hateoas', version:
> > > > '0.25.0.RELEASE'
> > > > 'org.springframework.ldap', name: 'spring-ldap-core', version:
> > > > '2.3.2.RELEASE'
> > > > 'org.springframework.shell', name: 'spring-shell', version:
> > > '1.2.0.RELEASE'
> > > >
> > > > Regards
> > > > Naba
> > > >
> > >
> >
>


Re: Propose adding GEODE-7400 fix to 1.11 release

2019-11-11 Thread Jason Huynh
+1

On Mon, Nov 11, 2019 at 9:41 AM Kirk Lund  wrote:

> I propose merging the fix for GEODE-7400 (merged to develop today) to the
> 1.11 release branch.
>
> My fix for GEODE-7330 (merged to develop in late October) introduced
> GEODE-7400 which is the potential for RejectedExecutionException to be
> thrown within FederatingManager.
>
> Thanks,
> Kirk
>
> commit 3c5a6ccf40b03c345f53f28214513a9d76a1e024
> Author: Aaron Lindsey 
> Date:   Mon Nov 11 09:36:24 2019 -0800
>
> GEODE-7400: Prevent RejectedExecutionException in FederatingManager
> (#4270)
>
> Commit f0c96db73263bb1b3cb04558f2a720d70f43421f changed the
> FederatingManager class so that it reuses the same ExecutorService
> between restarts. After that change, if we start the manager after
> previously starting and stopping it, we get RejectedExecutionException
> because it tries to invoke a task on the same ExecutorService which has
> been shut down.
>
> This commit changes the FederatingManager so that it invokes a supplier
> to get a new ExecutorService each time it is started to prevent the
> RejectedExecutionException.
>
> Co-authored-by: Aaron Lindsey 
> Co-authored-by: Kirk Lund 
>
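
A minimal sketch of the supplier approach described in the quoted commit message
(illustrative only, not the actual FederatingManager code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

public class RestartableManagerSketch {
  private final Supplier<ExecutorService> executorSupplier;
  private ExecutorService executor;

  RestartableManagerSketch(Supplier<ExecutorService> executorSupplier) {
    this.executorSupplier = executorSupplier;
  }

  // A fresh pool is fetched on every start, so tasks submitted after a stop/start
  // cycle never hit an ExecutorService that has already been shut down.
  synchronized void start() {
    executor = executorSupplier.get();
    executor.submit(() -> { /* kick off the manager's startup tasks */ });
  }

  synchronized void stop() {
    if (executor != null) {
      executor.shutdown();
    }
  }

  public static void main(String[] args) {
    RestartableManagerSketch manager =
        new RestartableManagerSketch(Executors::newCachedThreadPool);
    manager.start();
    manager.stop();
    manager.start(); // no RejectedExecutionException: a new pool was supplied
    manager.stop();
  }
}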


Re: Quick turnaround needed, feedback on this DRAFT Nov board report

2019-11-13 Thread Jason Huynh
+1

On Wed, Nov 13, 2019 at 10:06 AM Udo Kohlmeyer 
wrote:

> +1
>
> Haven't double checked the numbers, but the rest LGTM
>
> On 11/13/19 9:49 AM, Karen Miller wrote:
> > Draft board report for November 2019.  Submitting in 2 hours!  Quick
> > feedback, please!
> >
> > ## Description:
> > The mission of Apache Geode is the creation and maintenance of software
> > related
> > to a data management platform that provides real-time, consistent access
> to
> > data-intensive applications throughout widely distributed cloud
> > architectures.
> >
> > ## Issues:
> > There are no issues requiring board attention at this time.
> >
> > ## Membership Data:
> > Apache Geode was founded 2016-11-15 (3 years ago)
> > There are currently 104 committers and 52 PMC members in this project.
> > The Committer-to-PMC ratio is 2:1.
> >
> > Community changes, past quarter:
> > - Bill Burcham was added to the PMC on 2019-09-08
> > - Mario Ivanac was added to the PMC on 2019-09-08
> > - Bill Burcham was added as committer on 2019-09-09
> > - Mario Ivanac was added as committer on 2019-09-09
> >
> > ## Project Activity:
> > - Released Apache Geode 1.10.0 on 2019-09-26.
> > - Released Apache Geode 1.9.1 on 2019-09-06.
> >
> >
> > ## Community Health:
> > The community is actively contributing to the Apache Geode code base. In
> the
> > past quarter:
> > - 61 code contributors
> > - 347 issues opened in JIRA
> > - 275 issues closed in JIRA
> > - 434 PRs opened on GitHub
> > - 426 PRs closed on GitHub
> >
> > Enjoyed great attendance at the Apache Geode Summit held October 7,
> 2019, in
> > Austin, Texas.
> >
>


Re: Odg: gateway sender queue

2019-11-14 Thread Jason Huynh
+1 to Dan's suggestion

On Thu, Nov 14, 2019 at 9:52 AM Dan Smith  wrote:

> I'm ok with adding a --cleanQueue option.
>
> However, I don't think it should default to be true, since that would be
> changing the behavior of the existing command. It should default to false.
>
> -Dan
>
> On Thu, Nov 14, 2019 at 9:28 AM Xiaojian Zhou  wrote:
>
> > The --cleanQueue option is a similar idea as Barry's "DeadLetter" spike.
> I
> > remembered that we decided not to do it.
> >
> >
> > On Wed, Nov 13, 2019 at 11:41 PM Mario Ivanac 
> > wrote:
> >
> > > Hi,
> > >
> > > just to remind you on last question:
> > >
> > > what is your opinion on adding additional option in gfsh command
> "start
> > > gateway sender"
> > > to control clearing of existing queues --cleanQueues.
> > >
> > > This option will indicate, when gateway sender is started, should we
> > > discard/clean existing queue, or should we use existing queue.
> > > By default it will be to discard/clean existing queue.
> > >
> > > Best Regards,
> > > Mario
> > > 
> > > Šalje: Mario Ivanac 
> > > Poslano: 8. studenog 2019. 13:00
> > > Prima: dev@geode.apache.org 
> > > Predmet: Odg: gateway sender queue
> > >
> > > Hi all,
> > >
> > > one more clarification regarding 3rd question:
> > >
> > > "*   Could we add extra option in gfsh command  "start gateway sender"
> > >  that allows to control queues reset (for instance --cleanQueues)"
> > >
> > > This option will indicate, when gateway sender is started, should we
> > > discard/clean existing queue, or should we use existing queue.
> > > By default it will be to discard/clean existing queue.
> > >
> > > Best Regards,
> > > Mario
> > > 
> > > Šalje: Mario Ivanac 
> > > Poslano: 7. studenog 2019. 9:01
> > > Prima: Dan Smith ; dev@geode.apache.org <
> > > dev@geode.apache.org>
> > > Predmet: Odg: gateway sender queue
> > >
> > > Hi,
> > >
> > > thanks for answers.
> > >
> > > Some more details regarding 1st question.
> > >
> > > Is this behavior same (for serial and parallel gateway sender) in case
> > > queue is persistent?
> > > Meaning, should queue (persistent) be purged if we restart gateway
> > sender?
> > >
> > >
> > > Thanks,
> > > Mario
> > >
> > > 
> > > Šalje: Dan Smith 
> > > Poslano: 5. studenog 2019. 18:52
> > > Prima: dev@geode.apache.org 
> > > Predmet: Re: gateway sender queue
> > >
> > > Some replies, inline:
> > >
> > >   *   During testing we have observed, different behavior in parallel
> and
> > > > serial gateway senders. In case we manually stop, than start gateway
> > > > senders, for parallel gateway senders, queue is purged, but for
> serial
> > > > gateway senders this is not the case. Is this normal behavior or bug?
> > > >
> > >
> > > Hmm, I also think stop is supposed to clear the queue. I think if you
> are
> > > seeing that it doesn't clear the queue, that might be a bug.
> > >
> > >
> > >
> > > >   *   What happens with the queues when whole cluster is stopped and
> > > later
> > > > started (In our tests with persistent queues, the events are kept)?
> > > >
> > >
> > > Persistent queues will keep all of the events when you restart.
> > >
> > >
> > > >   *   Could we add extra option in gfsh command  "start gateway
> sender"
> > > > that allows to control queues reset (for instance --cleanQueues)?
> > > >
> > >
> > > If stop does clear the queue, would this be needed? It might still be
> > > reasonable - I've heard folks request a way to clear running queues as
> > > well.
> > >
> > > -Dan
> > >
> >
>


Re: [DISCUSS] is overriding a PR check ever justified?

2019-11-22 Thread Jason Huynh
@Udo - I think Naba was asking why the original commit that broke the
pipeline wasn't detected.

I think instead of a vote email, an email alerting the dev list that an
override needs to take place is still good to have.  If nothing else, to
identify areas that we might be able to improve with additional coverage or
checks.




On Fri, Nov 22, 2019 at 12:40 PM Udo Kohlmeyer 
wrote:

> @Naba.. wrong thread :)
>
> We have real scenario here now, where we have no consensus on whether we
> are allowed or not allowed to override.. Do we vote now? OR do we apply
> common sense?
>
> TBH, at this junction we should really just do whatever we believe is
> correct. A committer is appointed due to trust, so we should trust that
> our committers will do the right thing.
>
> But the same trust that our committers would always do the right thing
> has gotten us to this point where we don't trust
>
> MUCH bigger chicken-and-egg problem.
>
> I motion that we vote on this. I would also like to request all those
> AGAINST the override to provide strategies for us to not shoot-ourselves
> in the foot. (like Dan said)
>
> --Udo
>
> On 11/22/19 12:30 PM, Nabarun Nag wrote:
> > Just out of curiosity, why did the PR checks for GEODE-7488 not fail and
> > allowed it be merged? Is something lacking in our testing?
> >
> > On Fri, Nov 22, 2019 at 12:19 PM Dan Smith  wrote:
> >
> >> On Fri, Nov 22, 2019 at 11:56 AM Owen Nichols 
> wrote:
> >>
> >>> Tallying the votes from this thread, it looks like the majority vote is
> >> to
> >>> NEVER allow override even in extreme circumstance.
> >>>
> >> I think a better way of summarizing this thread so far is that there
> isn't
> >> really a consensus on this point, opinions seem to be fairly split. This
> >> wasn't a vote, and not everybody who expressed an opinion put a number
> next
> >> to their opinion or was directly aligned with the statement above.
> >>
> >> Maybe folks who think there should not be an override option could
> propose
> >> a specific process for dealing with issues like what Robert just did and
> >> try to bring the rest of us on board with that?
> >>
> >> -Dan
> >>
>


Re: [DISCUSS] - Move gfsh into its own submodule

2019-11-22 Thread Jason Huynh
+1
I think we are now at +114 thanks to jinmei's 100 ;-)


On Fri, Nov 22, 2019 at 1:50 PM Mark Bretl  wrote:

> +1
>
> On Fri, Nov 22, 2019 at 12:55 PM Nabarun Nag  wrote:
>
> > +1
> >
> > On Fri, Nov 22, 2019 at 12:51 PM Charlie Black 
> wrote:
> >
> > > this proposal == awesome sauce
> > >
> > > +1
> > >
> > > On Fri, Nov 22, 2019 at 11:24 AM Robert Houghton  >
> > > wrote:
> > >
> > > > +1
> > > >
> > > > Do we want to restart from my lazy POC from a few months ago?
> > > >
> > > > On Fri, Nov 22, 2019, 08:40 Jens Deppe  wrote:
> > > >
> > > > > Hello All,
> > > > >
> > > > > We'd like to propose moving gfsh and all associated commands into
> its
> > > own
> > > > > gradle submodule (implicitly thus also producing a separate maven
> > > > > artifact). The underlying intent is to decouple geode core from any
> > > > Spring
> > > > > dependencies.
> > > > >
> > > > > The proposal is outlined here:
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/GEODE/Move+gfsh+code+to+a+separate+gradle+sub-project
> > > > >
> > > > > Please provide feedback for this proposal *on this email thread*
> and
> > > not
> > > > in
> > > > > the comment section of the proposal page.
> > > > >
> > > > > The deadline for this proposal will be Monday, December 2.
> > > > >
> > > > > Thanks in advance for feedback / comments.
> > > > >
> > > > > --Jens & Patrick
> > > > >
> > > >
> > >
> > >
> > > --
> > > Charlie Black | cbl...@pivotal.io
> > >
> >
>


Re: [DISCUSS] is overriding a PR check ever justified?

2019-11-22 Thread Jason Huynh
@Udo - I believe that just noticing how often we have to override might
help influence what the correct decision, or better yet solution, might
be.  Not necessarily a vote email, just an email that describes
why we needed to override.  I think it will help us get a better
understanding of when we have had to override and might show us a trend
over time on which issues or areas we may need better coverage in.  Personally
I'd prefer that we have to override less often, and that it happens in the open when we do.

On Fri, Nov 22, 2019 at 2:02 PM Udo Kohlmeyer  wrote:

> @Jason, thank you for clarification.. I just pointed out to @Naba that
> it was the wrong thread. As, like in many cases, the original intent of
> the thread is lost because someone has asked a question, not directly
> relating to the thread and then the whole thread is derailed.
>
> When I asked for a vote, I was asking should we be voting on whether we
> should allow overrides. It was inconclusive, with 4 against and 3 for
> overrides. Which really leaves us in a position where some feel we
> should allow the "break glass emergency" override of branch protection
> and some don't, and the rest of the 90+ committers who don't care.
>
> Are you suggesting that every override now becomes a vote on the dev
> list? Given that we don't have a real stance on whether we allow it or
> not, maybe that is best UNTIL we hit a scenario where we cannot get
> consensus on the override.
>
> --Udo
>
> On 11/22/19 1:06 PM, Jason Huynh wrote:
> > @Udo - I think Naba was asking why the original commit that broke the
> > pipeline wasn't detected.
> >
> > I think instead of a vote email, an email alerting the dev list that an
> > override needs to take place is still good to have.  If nothing else, to
> > identify areas that we might be able to improve with additional coverage
> or
> > checks.
> >
> >
> >
> >
> > On Fri, Nov 22, 2019 at 12:40 PM Udo Kohlmeyer 
> > wrote:
> >
> >> @Naba.. wrong thread :)
> >>
> >> We have real scenario here now, where we have no consensus on whether we
> >> are allowed or not allowed to override.. Do we vote now? OR do we apply
> >> common sense?
> >>
> >> TBH, at this junction we should really just do whatever we believe is
> >> correct. A committer is appointed due to trust, so we should trust that
> >> our committers will do the right thing.
> >>
> >> But the same trust that our committers would always do the right thing
> >> has gotten us to this point where we don't trust
> >>
> >> MUCH bigger chicken-and-egg problem.
> >>
> >> I motion that we vote on this. I would also like to request all those
> >> AGAINST the override to provide strategies for us to not shoot-ourselves
> >> in the foot. (like Dan said)
> >>
> >> --Udo
> >>
> >> On 11/22/19 12:30 PM, Nabarun Nag wrote:
> >>> Just out of curiosity, why did the PR checks for GEODE-7488 not fail
> and
> >>> allowed it be merged? Is something lacking in our testing?
> >>>
> >>> On Fri, Nov 22, 2019 at 12:19 PM Dan Smith  wrote:
> >>>
> >>>> On Fri, Nov 22, 2019 at 11:56 AM Owen Nichols 
> >> wrote:
> >>>>> Tallying the votes from this thread, it looks like the majority vote
> is
> >>>> to
> >>>>> NEVER allow override even in extreme circumstance.
> >>>>>
> >>>> I think a better way of summarizing this thread so far is that there
> >> isn't
> >>>> really a consensus on this point, opinions seem to be fairly split.
> This
> >>>> wasn't a vote, and not everybody who expressed an opinion put a number
> >> next
> >>>> to their opinion or was directly aligned with the statement above.
> >>>>
> >>>> Maybe folks who think there should not be an override option could
> >> propose
> >>>> a specific process for dealing with issues like what Robert just did
> and
> >>>> try to bring the rest of us on board with that?
> >>>>
> >>>> -Dan
> >>>>
>


Re: [DISCUSS/VOTE] Proposal to bring GEODE-7465 to release/1.11.0

2019-11-26 Thread Jason Huynh
+1

On Tue, Nov 26, 2019 at 11:34 AM Anilkumar Gingade 
wrote:

> +1
>
> On Tue, Nov 26, 2019 at 11:32 AM Udo Kohlmeyer  wrote:
>
> > This is no-brainer
> >
> > *+1*
> >
> > On 11/26/19 11:27 AM, Owen Nichols wrote:
> > > I would like to propose bringing “GEODE-7465: Set eventProcessor to
> null
> > in serial AEQ when it is stopped” into the 1.11 release (necessitating an
> > RC4).
> > >
> > > Without the fix, a sequence of ordinary gfsh commands will leave the
> WAN
> > gateway in an unrecoverable hung state:
> > > stop gateway-sender
> > > start gateway-sender
> > > The only recourse is to restart the server.
> > >
> > > This fix is critical because the distributed system fails to sync data
> > between WAN sites as the user would expect.
> > > This issue did exist in previous releases, but recent enhancements to
> > WAN/AEQ such as AEQ-pause are increasing user interaction with
> WAN-related
> > gfsh commands.
> > >
> > > The fix is simple, low risk, tested, and has been on develop for 5
> days:
> > >
> >
> https://github.com/apache/geode/commit/e148cef9cb63eba283cf86bc490eb280023567ce
> >
>


Re: IndexType deprecation question

2019-12-02 Thread Jason Huynh
Hi Joris,

How are you creating the index?  If using the QueryService java api, there
should be createKeyIndex() and createIndex() methods.  These methods should
create the primary key index and the functional index.
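
For reference, a minimal sketch of those calls (assuming an existing Cache
and a /portfolio region with id and status fields - adjust the names for
your own data; the checked exceptions are left unhandled here):

QueryService queryService = cache.getQueryService();
// Replaces IndexType.PRIMARY_KEY
queryService.createKeyIndex("idKeyIndex", "id", "/portfolio");
// Replaces IndexType.FUNCTIONAL (range index)
queryService.createIndex("statusIndex", "status", "/portfolio");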

I am not sure if there is an alternative in gfsh... it might still be using
the IndexType enum or something similar.




On Fri, Nov 29, 2019 at 12:18 PM Joris Melchior 
wrote:

> Thanks John.
>
> I'm trying to use it on the server side for the management API so
> unfortunately the Spring wrapper is not an option. Hopefully someone can
> provide some insight into the deprecation background once all the turkey
> and stuffing has been digested.
>
> On Fri, Nov 29, 2019 at 2:16 PM John Blum  wrote:
>
> > FYI... if you are using *Spring Data for Apache Geode* (SDG;
> > spring-data-geode), then there is an SDG Index enum type
> > <
> >
> https://docs.spring.io/spring-data/geode/docs/current/api/org/springframework/data/gemfire/IndexType.html
> > >
> > [1]
> > wrapping the deprecated Apache Geode Index enum type
> > <
> >
> https://geode.apache.org/releases/latest/javadoc/org/apache/geode/cache/query/IndexType.html
> > >
> >  [2].
> >
> >
> >
> > [1]
> >
> >
> https://docs.spring.io/spring-data/geode/docs/current/api/org/springframework/data/gemfire/IndexType.html
> > [2]
> >
> >
> https://geode.apache.org/releases/latest/javadoc/org/apache/geode/cache/query/IndexType.html
> >
> >
> > On Fri, Nov 29, 2019 at 8:17 AM Joris Melchior 
> > wrote:
> >
> > > Hi All,
> > >
> > > I notice that the ENUM
> > >
> > > org.apache.geode.cache.query.IndexType has been deprecated but can't
> > > find what to use instead of this ENUM if I wanted to use a
> > > non-deprecated alternative.
> > >
> > > I understand that HASH indexes are no longer recommended but the other
> > > types (PRIMARY_KEY, FUNCTIONAL) are still valid and I believe we
> > > should be able to use them without using deprecated code.
> > >
> > > Can anyone tell me how this is accomplished?
> > >
> > > Thanks, Joris.
> > >
> > >
> > > --
> > > *Joris Melchior *
> > > CF Engineering
> > > Pivotal Toronto
> > > 416 877 5427
> > >
> > > “Programs must be written for people to read, and only incidentally for
> > > machines to execute.” – *Hal Abelson*
> > > 
> > >
> >
> >
> > --
> > -John
> > john.blum10101 (skype)
> >
>
>
> --
> *Joris Melchior *
> CF Engineering
> Pivotal Toronto
> 416 877 5427
>
> “Programs must be written for people to read, and only incidentally for
> machines to execute.” – *Hal Abelson*
> 
>


Re: Odg: Lucene upgrade

2019-12-02 Thread Jason Huynh
Hi Mario,

Sorry I reread the original email and see that the exception points to a
different problem.. I think your fix addresses an old version seeing an
unknown new lucene format, which looks good.  The following exception looks
like it's the new lucene library not being able to read the older files
(Just a guess from the message)...

Caused by: org.apache.lucene.index.IndexFormatTooOldException: Format
version is not supported (resource
BufferedChecksumIndexInput(segments_1)): 6 (needs to be between 7 and
9). This version of Lucene only supports indexes created with release
6.0 and later.

The upgrade is from 6.6.2 -> 8.x though, so I am not sure if the message is
incorrect (stating needs to be release 6.0 and later) or if it requires an
intermediate upgrade between 6.6.2 -> 7.x -> 8.





On Mon, Dec 2, 2019 at 2:00 AM Mario Kevo  wrote:

>
> I started with implementation of Option-1.
> As I understood the idea is to block all puts(put them in the queue) until
> all members are upgraded. After that it will process all queued events.
>
> I tried with Dan's proposal to check on start of
> LuceneEventListener.process() if all members are upgraded, also changed
> test to verify lucene indexes only after all members are upgraded, but got
> the same error with incompatibilities between lucene versions.
> Changes are visible on https://github.com/apache/geode/pull/4198.
>
> Please add comments and suggestions.
>
> BR,
> Mario
>
>
> 
> Šalje: Xiaojian Zhou 
> Poslano: 7. studenog 2019. 18:27
> Prima: geode 
> Predmet: Re: Lucene upgrade
>
> Oh, I misunderstood option-1 and option-2. What I vote is Jason's option-1.
>
> On Thu, Nov 7, 2019 at 9:19 AM Jason Huynh  wrote:
>
> > Gester, I don't think we need to write in the old format, we just need
> the
> > new format not to be written while old members can potentially read the
> > lucene files.  Option 1 can be very similar to Dan's snippet of code.
> >
> > I think Option 2 is going to leave a lot of people unhappy when they get
> > stuck with what Mario is experiencing right now and all we can say is
> "you
> > should have read the doc". Not to say Option 2 isn't valid and it's
> > definitely the least amount of work to do, I still vote option 1.
> >
> > On Wed, Nov 6, 2019 at 5:16 PM Xiaojian Zhou  wrote:
> >
> > > Usually re-creating region and index are expensive and customers are
> > > reluctant to do it, according to my memory.
> > >
> > > We do have an offline reindex scripts or steps (written by Barry?). If
> > that
> > > could be an option, they can try that offline tool.
> > >
> > > I saw from Mario's email, he said: "I didn't found a way to write
> lucene
> > in
> > > older format. They only support
> > > reading old format indexes with newer version by using lucene-backward-
> > > codec."
> > >
> > > That's why I think option-1 is not feasible.
> > >
> > > Option-2 will cause the queue to be filled. But usually customer will
> > hold
> > > on, silence or reduce their business throughput when
> > > doing rolling upgrade. I wonder if it's a reasonable assumption.
> > >
> > > Overall, after compared all the 3 options, I still think option-2 is
> the
> > > best bet.
> > >
> > > Regards
> > > Gester
> > >
> > >
> > > On Wed, Nov 6, 2019 at 3:38 PM Jacob Barrett 
> > wrote:
> > >
> > > >
> > > >
> > > > > On Nov 6, 2019, at 3:36 PM, Jason Huynh  wrote:
> > > > >
> > > > > Jake - there is a side effect to this in that the user would have
> to
> > > > > reimport all their data into the user defined region too.  Client
> > apps
> > > > > would also have to know which of the regions to put into.. also, I
> > may
> > > be
> > > > > misunderstanding this suggestion, completely.  In either case, I'll
> > > > support
> > > > > whoever implements the changes :-P
> > > >
> > > > Ah… there isn’t a way to re-index the existing data. Eh… just a
> > thought.
> > > >
> > > > -Jake
> > > >
> > > >
> > >
> >
>


Re: IndexType deprecation question

2019-12-02 Thread Jason Huynh
Hi Joris,

Just some guesses and no actual answer from me here:

The deprecation of the index type happened before HASH indexes were created, and
my guess is that it was due to the introduction of the "new at the time" query service
APIs (the javadoc: @deprecated As of 6.6.1. Check {@link QueryService} for
changes.)

The internals still use the IndexType as you have pointed out and maybe the
deprecation was more intended for the end user (since it's not an internal
type) and perhaps it was intended to have been moved internal at some
point?

There is some weirdness with the index types and actually having multiple
implementations for the Functional type.  Under the covers we create a
memory-optimized version based on the indexed expression.

Things have changed since deprecation, so maybe it should be
un/de-deprecated or an alternative solution can probably be thought up...

-Jason


On Mon, Dec 2, 2019 at 9:13 AM Joris Melchior  wrote:

> Hi Jason,
>
> At this point it is not about creating but returning the Region with an
> indicator in the management API without using deprecated parts. Under the
> covers the QueryService java api still uses the IndexType ENUM and had
> assumed that an alternative would be provided when something is marked as
> deprecated.
>
> I can of course create a new ENUM to return but prefer not to take this
> step before ensuring that we don't have something in place already.
>
> I'm also wondering if the deprecation should have been limited to the HASH
> type on the IndexType ENUM instead of deprecating the complete ENUM.
>
> Thanks, Joris.
>
> On Mon, Dec 2, 2019 at 12:05 PM Jason Huynh  wrote:
>
> > Hi Joris,
> >
> > How are you creating the index?  If using the QueryService java api,
> there
> > should be createKeyIndex() and createIndex() methods.  These methods
> should
> > create the primary key index and the functional index.
> >
> > I am not sure if there is an alternative in gfsh... it might still be
> using
> > the IndexType enum or something similar.
> >
> >
> >
> >
> > On Fri, Nov 29, 2019 at 12:18 PM Joris Melchior 
> > wrote:
> >
> > > Thanks John.
> > >
> > > I'm trying to use it on the server side for the management API so
> > > unfortunately the Spring wrapper is not an option. Hopefully someone
> can
> > > provide some insight into the deprecation background once all the
> turkey
> > > and stuffing has been digested.
> > >
> > > On Fri, Nov 29, 2019 at 2:16 PM John Blum  wrote:
> > >
> > > > FYI... if you are using *Spring Data for Apache Geode* (SDG;
> > > > spring-data-geode), then there is an SDG Index enum type
> > > > <
> > > >
> > >
> >
> https://docs.spring.io/spring-data/geode/docs/current/api/org/springframework/data/gemfire/IndexType.html
> > > > >
> > > > [1]
> > > > wrapping the deprecated Apache Geode Index enum type
> > > > <
> > > >
> > >
> >
> https://geode.apache.org/releases/latest/javadoc/org/apache/geode/cache/query/IndexType.html
> > > > >
> > > >  [2].
> > > >
> > > >
> > > >
> > > > [1]
> > > >
> > > >
> > >
> >
> https://docs.spring.io/spring-data/geode/docs/current/api/org/springframework/data/gemfire/IndexType.html
> > > > [2]
> > > >
> > > >
> > >
> >
> https://geode.apache.org/releases/latest/javadoc/org/apache/geode/cache/query/IndexType.html
> > > >
> > > >
> > > > On Fri, Nov 29, 2019 at 8:17 AM Joris Melchior  >
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I notice that the ENUM
> > > > >
> > > > > org.apache.geode.cache.query.IndexType has been deprecated but
> can't
> > > > > find what to use instead of this ENUM if I wanted to use a
> > > > > non-deprecated alternative.
> > > > >
> > > > > I understand that HASH indexes are no longer recommended but the
> > other
> > > > > types (PRIMARY_KEY, FUNCTIONAL) are still valid and I believe we
> > > > > should be able to use them without using deprecated code.
> > > > >
> > > > > Can anyone tell me how this is accomplished?
> > > > >
> > > > > Thanks, Joris.
> > > > >
> > > > >
> > > > > --
> > > > > *Joris Melchior *
> > > > > CF Engineering
> > > > > Pivotal Toronto
> > > > > 416 877 5427
> > > > >
> > > > > “Programs must be written for people to read, and only incidentally
> > for
> > > > > machines to execute.” – *Hal Abelson*
> > > > > <https://en.wikipedia.org/wiki/Hal_Abelson>
> > > > >
> > > >
> > > >
> > > > --
> > > > -John
> > > > john.blum10101 (skype)
> > > >
> > >
> > >
> > > --
> > > *Joris Melchior *
> > > CF Engineering
> > > Pivotal Toronto
> > > 416 877 5427
> > >
> > > “Programs must be written for people to read, and only incidentally for
> > > machines to execute.” – *Hal Abelson*
> > > <https://en.wikipedia.org/wiki/Hal_Abelson>
> > >
> >
>
>
> --
> *Joris Melchior *
> CF Engineering
> Pivotal Toronto
> 416 877 5427
>
> “Programs must be written for people to read, and only incidentally for
> machines to execute.” – *Hal Abelson*
> <https://en.wikipedia.org/wiki/Hal_Abelson>
>


Re: Odg: Odg: Lucene upgrade

2019-12-06 Thread Jason Huynh
Hi Mario,

I made a PR against your branch for some of the changes I had to do to get
past the IndexFormatTooNewException.  Summary - repo creation, even if no
writes occur, appears to create some metadata that the old node attempts to
read and blows up on.

The pr against your branch just prevents the repo from being constructed
until all old members are upgraded.
This requires test changes to not try to validate using queries (since we
prevent draining and repo creation, the query will just wait)

The reason why you probably were seeing unsuccessful dispatches, is because
we kind of intended for that with the oldMember check.  In-between the
server rolls, the test was trying to verify, but because not all servers
had upgraded, the LuceneEventListener wasn't allowing the queue to drain on
the new member.

I am not sure if the changes I added are acceptable or not -maybe if this
ends up working then we can discuss on the dev list.

There will probably be other "gotcha's" along the way...


On Fri, Dec 6, 2019 at 1:12 AM Mario Kevo  wrote:

> Hi Jason,
>
> I tried to upgrade from 6.6.2 to 7.1.0 and got the following exception:
>
> org.apache.lucene.index.IndexFormatTooNewException: Format version is not 
> supported (resource BufferedChecksumIndexInput(segments_2)): 7 (needs to be 
> between 4 and 6)
>
> It looks like the fix is not good.
>
> What I see (from
> *RollingUpgradeQueryReturnsCorrectResultsAfterServersRollOverOnPartitionRegion*
> *.java*) is when it doing upgrade of a *locator* it will shutdown and
> started on the newer version. The problem is that *server2* become a lead
> and cannot read lucene index on the newer version(Lucene index format has
> changed between 6 and 7 versions).
>
> Another problem is after the rolling upgrade of *locator* and *server1*
> when verifying region size on VMs. For example,
>
>
>
> *expectedRegionSize += 
> 5;putSerializableObjectAndVerifyLuceneQueryResult(server1, regionName, 
> expectedRegionSize, 5,15, server2, server3);*
>
> First it checks if region has expected size for VMs and it passed(has 15 
> entries). The problem is while executing verifyLuceneQueryResults, for 
> VM1(server2) it has 13 entries and assertion failed.
> From logs it can be seen that two batches are unsuccessfully dispatched:
>
>
> *[vm0] [warn 2019/12/06 08:31:39.956 CET  GatewaySender_AsyncEventQueue_index#_aRegion_0> tid=0x42] During normal 
> processing, unsuccessfully dispatched 1 events (batch #0)*
>
>
> *[vm0] [warn 2019/12/06 08:31:40.103 CET  GatewaySender_AsyncEventQueue_index#_aRegion_2> tid=0x46] During normal 
> processing, unsuccessfully dispatched 1 events (batch #0)*
> For VM0(server1) and VM2(server3) it has 14 entries, one is unsuccessfully 
> dispatched.
>
> I don't know why some events are successfully dispatched, some not.
> Do you have any idea?
>
> BR,
> Mario
>
>
> --
> *Šalje:* Jason Huynh 
> *Poslano:* 2. prosinca 2019. 18:32
> *Prima:* geode 
> *Predmet:* Re: Odg: Lucene upgrade
>
> Hi Mario,
>
> Sorry I reread the original email and see that the exception points to a
> different problem.. I think your fix addresses an old version seeing an
> unknown new lucene format, which looks good.  The following exception looks
> like it's the new lucene library not being able to read the older files
> (Just a guess from the message)...
>
> Caused by: org.apache.lucene.index.IndexFormatTooOldException: Format
> version is not supported (resource
> BufferedChecksumIndexInput(segments_1)): 6 (needs to be between 7 and
> 9). This version of Lucene only supports indexes created with release
> 6.0 and later.
>
> The upgrade is from 6.6.2 -> 8.x though, so I am not sure if the message is
> incorrect (stating needs to be release 6.0 and later) or if it requires an
> intermediate upgrade between 6.6.2 -> 7.x -> 8.
>
>
>
>
>
> On Mon, Dec 2, 2019 at 2:00 AM Mario Kevo  wrote:
>
> >
> > I started with implementation of Option-1.
> > As I understood the idea is to block all puts(put them in the queue)
> until
> > all members are upgraded. After that it will process all queued events.
> >
> > I tried with Dan's proposal to check on start of
> > LuceneEventListener.process() if all members are upgraded, also changed
> > test to verify lucene indexes only after all members are upgraded, but
> got
> > the same error with incompatibilities between lucene versions.
> > Changes are visible on https://github.com/apache/geode/pull/4198.
> >
> > Please add comments and suggestions.
> >
> > BR,
> > Mario
> >
> >
> > 
> > Šalje: Xiaoj

Request GEODE-7510/GEODE-7538 be cherry-picked into release 1.11

2019-12-11 Thread Jason Huynh
Hello,

GEODE-7538 was highlighted as blocking the 1.11 release.  This has now been
addressed, and I propose that this gets merged over to release 1.11.

This fix solves a few things, most notably: GEODE-7510 shows
inconsistency between secondaries and primaries.  GEODE-7538 showed
operations not consistently being applied.  The code change is a revert of
a change that modified profile calculation.  It may affect other areas that
were showing up as flaky, as it's timing related.

Thanks,
-Jason


Re: Request GEODE-7510/GEODE-7538 be cherry-picked into release 1.11

2019-12-11 Thread Jason Huynh
I believe we'll have to cherry-pick 1448c83c2a910b2891b4c13f1b4cbed2920252de
across.  Unfortunately it went in as a merge (there was a hiccup on squash
and merge where the try-again didn't do a squash merge) :-(.




On Wed, Dec 11, 2019 at 11:30 AM Mark Hanson  wrote:

> Can I get the SHA of the commit?
>
> Thanks,
> Mark
>
> > On Dec 11, 2019, at 11:02 AM, Jason Huynh  wrote:
> >
> > Hello,
> >
> > GEODE-7538 was highlighted as blocking the 1.11 release.  This has now
> been
> > addressed and propose that this gets merged over to release 1.11.
> >
> > This issue solves a few things, most notably: GEODE-7510 shows
> > inconsistency between secondaries and primaries.  GEODE-7538 showed
> > operations not consistently being applied.  The code change is a revert
> > that modified profile calculation.  It may affect other areas that were
> > showing up as flaky as it's timing related.
> >
> > Thanks,
> > -Jason
>
>


Re: Odg: Odg: Odg: Lucene upgrade

2019-12-11 Thread Jason Huynh
Hi Mario,

Is the same test failing?  If it's a different test, could you tell us
which one?
If it's a rolling upgrade test, then we might have to mark this as expected
behavior and modify the tests to waitForFlush (wait until the queue is
drained).  As long as the test is able to roll all the servers and not get
stuck waiting for a queue to flush (which will only happen once all the
servers are rolled now).

If the test hasn't rolled all the servers and is trying to execute a query,
then we'd probably have to modify the test to not do the query in the
middle or expect that exception to occur.
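
For reference, once all the servers have rolled, waiting for the queue to
drain could look roughly like this (index/region names taken from the test,
timeout is arbitrary, InterruptedException handling omitted):

LuceneService luceneService = LuceneServiceProvider.get(cache);
// Blocks until the AEQ backing the index has drained, i.e. indexing has caught up
boolean flushed = luceneService.waitUntilFlushed("index", "aRegion", 5, TimeUnit.MINUTES);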

Thanks,
-Jason

On Wed, Dec 11, 2019 at 6:43 AM Mario Kevo  wrote:

> Hi Jason,
>
> This change fix IndexFormatTooNewException, but now we have
>
>  org.apache.geode.cache.lucene.LuceneQueryException: Lucene Index is not 
> available, currently indexing
>
>
> So this means that query doesn't wait until all indexes are created.
> In * LuceneQueryFunction.java* it is set to not wait for repo 
> [*execute(context,
> false)*]. If we have a bigger queue(like in the test) it will failed as
> it will not wait until indexes are created. I also tried to put just few
> objects and it passed as it had enough time to create indexes.
> Do we need to change this part to wait for repo, or put a lower number of
> entries in tests?
>
> BR,
> Mario
>
>
>
> --
> *Šalje:* Jason Huynh 
> *Poslano:* 6. prosinca 2019. 20:53
> *Prima:* Mario Kevo 
> *Kopija:* geode 
> *Predmet:* Re: Odg: Odg: Lucene upgrade
>
> Hi Mario,
>
> I made a PR against your branch for some of the changes I had to do to get
> past the Index too new exception.  Summary - repo creation, even if no
> writes occur, appear to create some meta data that the old node attempts to
> read and blow up on.
>
> The pr against your branch just prevents the repo from being constructed
> until all old members are upgraded.
> This requires test changes to not try to validate using queries (since we
> prevent draining and repo creation, the query will just wait)
>
> The reason why you probably were seeing unsuccessful dispatches, is
> because we kind of intended for that with the oldMember check.  In-between
> the server rolls, the test was trying to verify, but because not all
> servers had upgraded, the LuceneEventListener wasn't allowing the queue to
> drain on the new member.
>
> I am not sure if the changes I added are acceptable or not -maybe if this
> ends up working then we can discuss on the dev list.
>
> There will probably be other "gotcha's" along the way...
>
>
> On Fri, Dec 6, 2019 at 1:12 AM Mario Kevo  wrote:
>
> Hi Jason,
>
> I tried to upgrade from 6.6.2 to 7.1.0 and got the following exception:
>
> org.apache.lucene.index.IndexFormatTooNewException: Format version is not 
> supported (resource BufferedChecksumIndexInput(segments_2)): 7 (needs to be 
> between 4 and 6)
>
> It looks like the fix is not good.
>
> What I see (from
> *RollingUpgradeQueryReturnsCorrectResultsAfterServersRollOverOnPartitionRegion*
> *.java*) is when it doing upgrade of a *locator* it will shutdown and
> started on the newer version. The problem is that *server2* become a lead
> and cannot read lucene index on the newer version(Lucene index format has
> changed between 6 and 7 versions).
>
> Another problem is after the rolling upgrade of *locator* and *server1*
> when verifying region size on VMs. For example,
>
>
>
> *expectedRegionSize += 
> 5;putSerializableObjectAndVerifyLuceneQueryResult(server1, regionName, 
> expectedRegionSize, 5,15, server2, server3);*
>
> First it checks if region has expected size for VMs and it passed(has 15 
> entries). The problem is while executing verifyLuceneQueryResults, for 
> VM1(server2) it has 13 entries and assertion failed.
> From logs it can be seen that two batches are unsuccessfully dispatched:
>
>
> *[vm0] [warn 2019/12/06 08:31:39.956 CET  GatewaySender_AsyncEventQueue_index#_aRegion_0> tid=0x42] During normal 
> processing, unsuccessfully dispatched 1 events (batch #0)*
>
>
> *[vm0] [warn 2019/12/06 08:31:40.103 CET  GatewaySender_AsyncEventQueue_index#_aRegion_2> tid=0x46] During normal 
> processing, unsuccessfully dispatched 1 events (batch #0)*
> For VM0(server1) and VM2(server3) it has 14 entries, one is unsuccessfully 
> dispatched.
>
> I don't know why some events are successfully dispatched, some not.
> Do you have any idea?
>
> BR,
> Mario
>
>
> --
> *Šalje:* Jason Huynh 
> *Poslano:* 2. prosinca 2019. 18:32
> *Prima:* geode 
> *Predmet:* Re: Odg: Lu

Re: Odg: Odg: Odg: Odg: Lucene upgrade

2019-12-13 Thread Jason Huynh
Hi Mario,

I think I see what is going on here.  The logic for the "reindex" code was a
bit off (it expected reindex features to be complete by a certain
release).  I have a PR on develop to adjust that calculation (
https://github.com/apache/geode/pull/4466)

The expectation is that when lucene reindex (indexing a region with data
already in it) is enabled, any query will now throw the
LuceneIndexingInProgressException instead of possibly waiting a very long
time to receive a query result.  The tests themselves are coded to retry 10
times, knowing it will take a while to reindex.  If you bump this number up
or, better yet, make it time based (awaitility, etc.), it should get you
past this problem (once the pull request gets checked in and pulled into
your branch).
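
Something along these lines for the time-based version (sketch only;
verifyQuery() stands in for the test's existing query/assert step):

// static import: org.awaitility.Awaitility.await
// Keep retrying the query until reindexing finishes, instead of a fixed 10 attempts
await().atMost(2, TimeUnit.MINUTES)
    .untilAsserted(() -> verifyQuery());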

Thanks!
-Jason


On Thu, Dec 12, 2019 at 5:07 AM Mario Kevo  wrote:

> Hi Jason,
>
> Yes, the same tests failed:
>
> RollingUpgradeQueryReturnsCorrectResultAfterTwoLocatorsWithTwoServersAreRolled
>
> RollingUpgradeQueryReturnsCorrectResultsAfterServersRollOverOnPartitionRegion
>
> Sometimes this tests passed but more times it failed.
> As I said when change tests to put lower number of entries it passed
> every time or set to wait for repo in LuceneQueryFunction.java.
>
> *waitUntilFlushed* is called by *verifyLuceneQueryResults* before
> executing queries. Also tried to wait until *isIndexingInProgress* return
> false, but reached timeout and failed.
> In tests it tried to execute a query after all members are rolled.
>
> BR,
> Mario
>
> --
> *Šalje:* Jason Huynh 
> *Poslano:* 11. prosinca 2019. 23:08
> *Prima:* Mario Kevo 
> *Kopija:* geode 
> *Predmet:* Re: Odg: Odg: Odg: Lucene upgrade
>
> Hi Mario,
>
> Is the same test failing?  If it's a different test, could you tell us
> which one?
> If it's a rolling upgrade test, then we might have to mark this as
> expected behavior and modify the tests to waitForFlush (wait until the
> queue is drained).  As long as the test is able to roll all the servers and
> not get stuck waiting for a queue to flush (which will only happen once all
> the servers are rolled now).
>
> If the test hasn't rolled all the servers and is trying to execute a
> query, then we'd probably have to modify the test to not do the query in
> the middle or expect that exception to occur.
>
> Thanks,
> -Jason
>
> On Wed, Dec 11, 2019 at 6:43 AM Mario Kevo  wrote:
>
> Hi Jason,
>
> This change fix IndexFormatTooNewException, but now we have
>
>  org.apache.geode.cache.lucene.LuceneQueryException: Lucene Index is not 
> available, currently indexing
>
>
> So this means that query doesn't wait until all indexes are created.
> In *LuceneQueryFunction.java* it is set to not wait for repo 
> [*execute(context,
> false)*]. If we have a bigger queue(like in the test) it will failed as
> it will not wait until indexes are created. I also tried to put just few
> objects and it passed as it had enough time to create indexes.
> Do we need to change this part to wait for repo, or put a lower number of
> entries in tests?
>
> BR,
> Mario
>
>
>
> --
> *Šalje:* Jason Huynh 
> *Poslano:* 6. prosinca 2019. 20:53
> *Prima:* Mario Kevo 
> *Kopija:* geode 
> *Predmet:* Re: Odg: Odg: Lucene upgrade
>
> Hi Mario,
>
> I made a PR against your branch for some of the changes I had to do to get
> past the Index too new exception.  Summary - repo creation, even if no
> writes occur, appear to create some meta data that the old node attempts to
> read and blow up on.
>
> The pr against your branch just prevents the repo from being constructed
> until all old members are upgraded.
> This requires test changes to not try to validate using queries (since we
> prevent draining and repo creation, the query will just wait)
>
> The reason why you probably were seeing unsuccessful dispatches, is
> because we kind of intended for that with the oldMember check.  In-between
> the server rolls, the test was trying to verify, but because not all
> servers had upgraded, the LuceneEventListener wasn't allowing the queue to
> drain on the new member.
>
> I am not sure if the changes I added are acceptable or not -maybe if this
> ends up working then we can discuss on the dev list.
>
> There will probably be other "gotcha's" along the way...
>
>
> On Fri, Dec 6, 2019 at 1:12 AM Mario Kevo  wrote:
>
> Hi Jason,
>
> I tried to upgrade from 6.6.2 to 7.1.0 and got the following exception:
>
> org.apache.lucene.index.IndexFormatTooNewException: Format version is not 
> supported (resource BufferedChecksumIndexInput(segments_2)): 7 (needs to be 
> bet

Re: Reviewer for GEODE-7534: Add example for query with bind params (documentation)

2019-12-17 Thread Jason Huynh
Hi Alberto,

Looks like Dan and Dave have both reviewed it.  I took a look and didn't
see anything wrong so I squash merged it in.
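
For anyone finding this thread later, a bind-parameter query looks roughly
like this (region and field names are made up for illustration; exception
handling omitted):

QueryService queryService = cache.getQueryService();
Query query = queryService.newQuery(
    "SELECT * FROM /exampleRegion e WHERE e.status = $1 AND e.id > $2");
// $1 and $2 are bound positionally from the params array
SelectResults<?> results = (SelectResults<?>) query.execute(new Object[] {"active", 100});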

Thanks!
-Jason

On Fri, Dec 13, 2019 at 3:30 AM Alberto Gomez 
wrote:

> Hi,
>
> I'd appreciate some extra reviewer (I already had one, thanks @Dave
> Barnes) and if everything is ok someone to merge the following pull request:
>
> https://github.com/apache/geode/pull/4452
>
> Thanks,
>
> Alberto
>
>


Re: Proposal to including GEODE-7593 in release/1.11.0

2019-12-19 Thread Jason Huynh
+1

On Thu, Dec 19, 2019 at 10:05 AM Owen Nichols  wrote:

> GEODE-7593 fixes a memory leak where indexes could retain references to
> pdx values when eviction should have released that memory.
>
> This is not a new issue, but is critical because system stability is
> threatened when eviction does not release memory as expected.
>
> The SHA is 1beec9e3930a071031b960f045874fb337e72e7c.


Re: [DISCUSS] abandon branch protection rules

2019-12-27 Thread Jason Huynh
I feel the frustration at times, but I do also think the ci/pipelines are
improving, breaking less often.  I'm ok with the way things are for the
moment

On Fri, Dec 27, 2019 at 1:47 PM Owen Nichols  wrote:

> In October we agreed to require at least 1 reviewer and 4 passing PR
> checks before a PR can be merged.  Now that we’re tried it for a few
> months, do we like it?
>
> I saw some strong opinions on the dev list recently:
>
> > Changes to the infrastructure to flat out prevent things that should be
> self policing is annoying. This PR review lock we have had already cost us
> valuable time waiting for PR pipelines to pass that have no relevance to
> the commit, like CI work. I hate to see process enforced that keeps us from
> getting work done when necessary.
>
>
> and
>
> > I think we're getting more and more bureaucratic in our process and that
> it stifles productivity.  I was recently forced to spend three days fixing
> tests in which I had changed an import statement before they would pass
> stress testing.  I'm glad the tests now pass reliably but I was very
> frustrated by the process.
>
>
> Just wondering if others feel the same way.  Is it time to make some
> changes?
>
> -Owen


Re: [DISCUSS] abandon branch protection rules

2019-12-27 Thread Jason Huynh
Just to add more flavor to my previous response... I currently have a PR
open that modified a method signature and touched a few WAN tests.  It was
a simple change, removing an unused parameter.  StressNewTest failed and I
had to spend another day figuring out 10 or so different failures.  A waste
of time?  Maybe.  At first, I wasn't going to continue, but after trying a
few things, it looks like the tests installed a listener that was hampering
other tests.  In the end (soon, once it gets reviewed/merged), we end up
with a green PR and hopefully have unblocked others on these specific tests
in the future.

On Fri, Dec 27, 2019 at 2:58 PM Jason Huynh  wrote:

> I feel the frustration at times, but I do also think the ci/pipelines are
> improving, breaking less often.  I'm ok with the way things are for the
> moment
>
> On Fri, Dec 27, 2019 at 1:47 PM Owen Nichols  wrote:
>
>> In October we agreed to require at least 1 reviewer and 4 passing PR
>> checks before a PR can be merged.  Now that we’re tried it for a few
>> months, do we like it?
>>
>> I saw some strong opinions on the dev list recently:
>>
>> > Changes to the infrastructure to flat out prevent things that should be
>> self policing is annoying. This PR review lock we have had already cost us
>> valuable time waiting for PR pipelines to pass that have no relevance to
>> the commit, like CI work. I hate to see process enforced that keeps us from
>> getting work done when necessary.
>>
>>
>> and
>>
>> > I think we're getting more and more bureaucratic in our process and
>> that it stifles productivity.  I was recently forced to spend three days
>> fixing tests in which I had changed an import statement before they would
>> pass stress testing.  I'm glad the tests now pass reliably but I was very
>> frustrated by the process.
>>
>>
>> Just wondering if others feel the same way.  Is it time to make some
>> changes?
>>
>> -Owen
>
>


Re: RFC - Logging to Standard Out

2020-01-08 Thread Jason Huynh
+1

On Wed, Jan 8, 2020 at 1:21 PM Dan Smith  wrote:

> +1. Looks good!
>
> -Dan
>
> On Wed, Jan 8, 2020 at 12:56 PM Blake Bender  wrote:
>
> > +1 - this is also a todo item for the native client, I think.  NC has a
> bug
> > in logging which is in my top 3 for "most irritating," as well, which is
> > that logging actually starts *before* the logging system is initialized,
> so
> > even if you *do* configure a log file, if something happens in NC prior
> to
> > that it gets logged to stdout.  Similarly, logging can continue in NC
> > *after* the logger is shut down, and any logging after that also goes to
> > stdout.  I'm a big fan of making everything consistent, and this seems as
> > good a way as any.
> >
> > Just FWIW, using the character '-' anywhere in a log file name for NC
> will
> > currently cause a segfault, so this will force us to fix that problem as
> > well.
> >
> >
> > On Wed, Jan 8, 2020 at 12:39 PM Jacob Barrett 
> wrote:
> >
> > > Please see RFC for Logging to Standard Out.
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/GEODE/Logging+to+Standard+Out
> > > <
> >
> https://cwiki.apache.org/confluence/display/GEODE/Logging+to+Standard+Out
> > > >
> > >
> > > Please comment by 1/21/2020.
> > >
> > > Thanks,
> > > Jake
> > >
> > >
> >
>


Re: Odg: ParallelGatewaySenderQueue implementation

2020-01-27 Thread Jason Huynh
Some additional info/context from the PR that is blocked by this issue:

Although we have GFSH stop, it can still be used on an individual node.  We
just publish a caution, but it looks like we still allow it because some
users have been relying on it:

CAUTION: Use caution with the stop gateway-sender command (or equivalent
GatewaySender.stop() API) on parallel gateway senders. Instead of stopping
an individual parallel gateway sender on a member, we recommend shutting
down the entire member to ensure proper failover of partition region
events to other gateway sender members. Using this command on an individual
parallel gateway sender can result in event loss. See Stopping Gateway
Senders for more details.

There were some issues with the PR (https://github.com/apache/geode/pull/4387)
when close is implemented.  It doesn't allow a single sender to be shut
down on a node.  I do know of some users that rely on this behavior,
whether they should be able to or not, they have used this in the past
(which is why we added the test
shuttingOneSenderInAVMShouldNotAffectOthersBatchRemovalThread)

The close in combination with stopping gateway senders can cause odd
issues, like PartitionedOfflineExceptions, RegionDestroyedExceptions or
behavior like this test is exhibiting. We have some internal applications
that are running into these types of issues with this diff as well.



On Mon, Jan 27, 2020 at 10:09 AM Dan Smith  wrote:

> Hi Mario,
>
> That bug number is from an old version of GemFire before it was open
> sourced as geode.
>
> Looking at some of the old bug info, it looks like the bug had to do with
> the fact that calling stop on the region was causing there to be unexpected
> RegionDestroyedException's to be thrown when the queue was stopped *on one
> member*. Now that we have "gfsh stop" to stop the queue everywhere, it's
> not clear to me that closing the region would be a problem - it seems like
> the right thing to do if that will make the behavior more consistent with
> serial senders.
>
> -Dan
>
> On Fri, Jan 24, 2020 at 2:39 AM Mario Ivanac 
> wrote:
>
> > Hi geode dev,
> >
> > Do you know more info regarding this bug  49060,
> > because I think this the cause of issue
> > https://issues.apache.org/jira/browse/GEODE-7441.
> >
> > When closing of region is returned (at stoping of parallel GW sender),
> > persistent parallel GW sender queue is restored after restart.
> >
> > BR,
> > Mario
> > 
> > Šalje: Mario Ivanac
> > Poslano: 11. studenog 2019. 13:29
> > Prima: dev@geode.apache.org 
> > Predmet: ParallelGatewaySenderQueue implementation
> >
> > Hi geode dev,
> >
> > I am investigating SerialGatewaySenderQueue and
> ParallelGatewaySenderQueue
> > implementation,
> >
> > and I found that in ParallelGatewaySenderQueue.close() function,
> > code is deleted and comment is left:
> >
> > // Because of bug 49060 do not close the regions of a parallel queue
> >
> > My question is, where can I find some more info regarding this bug.
> >
> > BR,
> > Mario
> >
>


Re: OQL Method Authorizer Blog

2020-02-14 Thread Jason Huynh
Great job Juan!  Very informative and detailed read.

On Fri, Feb 14, 2020 at 4:43 AM Nabarun Nag  wrote:

> Hi Geode Community,
>
> Please do visit the blog that Juan Ramos has put up on the OQL Method
> Authorizer :
>
> https://jujoramos.blogspot.com/2020/02/pluggable-oql-method-authorization.html
>
> Thank you Juan for this effort.
>
> Regards
> Nabarun Nag
>


Re: Question about Hash Indexes

2020-02-19 Thread Jason Huynh
Hi Mario,

From my understanding:
1.) The Hash Index was not implemented like a traditional hash index; instead
it is more of a memory-saving index that uses hashing to avoid storing keys.
When the backing data structure needs to expand, it has to rehash a lot
of data, and this can be detrimental at runtime.
2.) It's been deprecated so I don't see a reason why it wouldn't be removed
in the future...
2b.) I think a true traditional hash index would be good to have in the
project and maybe we would be able to un-deprecate it instead...
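
For completeness, if memory allows, the non-deprecated path today is just a
plain range index on the same expression (names here are only for
illustration; exception handling omitted):

QueryService queryService = cache.getQueryService();
// Deprecated, memory-saving variant:
// queryService.createHashIndex("idHash", "p.id", "/portfolios p");
// Non-deprecated range index on the same expression:
queryService.createIndex("idIndex", "p.id", "/portfolios p");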

Thanks,
-Jason

On Wed, Feb 19, 2020 at 1:36 AM Mario Ivanac  wrote:

> Hi geode dev,
>
> could you help us with some questions regarding Hash Indexes:
>
>   1.  Why are Hash Indexes deprecated?
>   2.  Are there any plans in the future for there removal?
>
> Thanks,
> Mario
>


[PROPOSAL] Include fix for GEODE-7763 into release 1.12.0

2020-03-18 Thread Jason Huynh
Hello Dev list,

I'd like to include a fix for GEODE-7763 in release 1.12.0.
The change removes the call to exportValue, preventing a serialization,
when no clients are waiting for the specific event.
The reason I think it should be in the release is that we noticed a
negative effect on performance for a specific use case in 1.12, from a
change that made us more "consistent" in that use case.  This change
doesn't modify the fix much, but does bring performance back in line with
(if not better than) before.

The sha is b4c3e9413f0008635b0a0c9116c96147d1e4ceec

Thanks,
-Jason


Re: [DISCUSS] Client side configuration for a SNI proxy

2020-03-19 Thread Jason Huynh
+1

On Thu, Mar 19, 2020 at 7:27 AM Joris Melchior  wrote:

> +1
>
> On Mon, Mar 16, 2020 at 6:33 PM Dan Smith  wrote:
>
> > Hi all,
> >
> > A new RFC is up for this feature
> >
> >
> https://cwiki.apache.org/confluence/display/GEODE/Client+side+configuration+for+a+SNI+proxy
> > .
> >
> >
> > Please review and comment by this Friday, 3/20/2020.
> >
> > This hopefully addresses some of the concerns with the previous RFC for
> > this feature. The new proposal is for a more general SocketFactory
> property
> > that users can implement, along the lines of what Jake and Owen
> suggested.
> >
> > -Dan
> >
>
>
> --
> *Joris Melchior *
> CF Engineering
> Pivotal Toronto
> 416 877 5427
>
> “Programs must be written for people to read, and only incidentally for
> machines to execute.” – *Hal Abelson*
> 
>


Re: [ANNOUNCE] Github Wiki and Issus are now activated on geode-kafka-connector

2020-03-24 Thread Jason Huynh
Awesome, thanks for doing all the leg work!

On Tue, Mar 24, 2020 at 1:42 PM Nabarun Nag  wrote:

> Hi,
>
> Issues are wiki pages  are now active on the geode-kafka-connector
> repository. Thank you all for the kind votes.
>
> Regards
> Nabarun Nag
>


Re: RFC - Gateway sender to deliver transaction events atomically to receivers

2020-03-25 Thread Jason Huynh
I put some comments on the proposal on the wiki.

btw what are we voting on?  Just curious as I wasn't sure if we were voting
for the current proposal or whether we should continue this discussion?

I like the idea of having transactional ops sent together in a batch if
possible, and it would be an iterative improvement; whether that is a
complete solution to a larger problem is, I think, beyond what Alberto
was proposing.

Again I am not exactly sure if this was intended to be a vote but I
would +1 the attempt and continuation of the discussion/proposal and
probably -0 the current proposal as there are some ideas/things to iron
out.




On Wed, Mar 25, 2020 at 3:49 PM Udo Kohlmeyer  wrote:

> Hi there Alberto,
>
> It's a "-1" from me.
>
> I have raised my concerns in the RFC comments. To summarize, whilst I
> like the idea (I had never thought of that problem you are trying to
> solve), I don't know how this will behave at scale. Just looking at some
> of the comments, I think it is safe to say that many have similar feelings.
>
> I like the notion of this proposal, but I'm not convinced that the
> solution is actually going solve the problem. I think it might solve
> only a very small part of the problem.
>
> In essence you are proposing a distributed transaction over WAN and I
> don't see enough in the proposal to convince me that we have a solution
> that will solve this problem.
>
> --Udo
>
> On 3/25/20 8:04 AM, Alberto Gomez wrote:
> > Hi,
> >
> > Could you please review the RFC for "Gateway sender to deliver
> transaction events atomically to receivers"?
> >
> >
> https://cwiki.apache.org/confluence/display/GEODE/Gw+sender+to+deliver+transaction+events+atomically+to+receivers
> >
> > Deadline for comments is Wednesday, April 1st, 2020,
> >
> > Thanks,
> >
> > Alberto G.
> >
>


Re: [Discuss] Cache.close synchronous is not synchronous, but code still expects it to be....

2020-04-14 Thread Jason Huynh
The isClosed flag and method are currently used more as an isClosing.
The GemFireCacheImpl.isClosed() method actually returns the isClosing state.
Whatever change is made to isClosed will have to properly handle
cases where it's meant to be treated as isClosing().
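
To make that concrete, an Option-6 style wait can return while shutdown work
is still in flight with today's behavior, e.g. (sketch only, bounded so it
can't spin forever; InterruptedException handling omitted):

cache.close();
long deadline = System.currentTimeMillis() + TimeUnit.MINUTES.toMillis(1);
// Caution: isClosed() currently reports "closing", so this loop may exit
// before the close has fully completed.
while (!cache.isClosed() && System.currentTimeMillis() < deadline) {
  Thread.sleep(100);
}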

On Tue, Apr 14, 2020 at 3:09 PM Mark Hanson  wrote:

> Hi Jake,
>
> For Option 6: We could fix isClosed as well. That is a great suggestion.
> Currently, it returns almost immediately.
> I like your options though
>
> Any other thoughts?
>
> Any preferences? It think any of the current options seem better than the
> current situation as long as we fix isClosed.
>
> Thanks,
> Mark
> 
> From: Jacob Barrett 
> Sent: Tuesday, April 14, 2020 2:30 PM
> To: dev@geode.apache.org 
> Subject: Re: [Discuss] Cache.close synchronous is not synchronous, but
> code still expects it to be
>
> Option 4: Cache.closeAndWait(long timeout, TimeUnit unit) - Closes and
> waits until it is really closed.
> Option 5: Cache.close(Runnable closedCalleback) - Runs callback after
> cache is really close.
> Option 6: cache.close(); while (!cache.isClosed());
>
>
> > On Apr 14, 2020, at 2:11 PM, Mark Hanson  wrote:
> >
> > Hi All,
> >
> > I know that we have discussed this once before, but I think it bears
> repeating. We have test code that assumes cache.close is synchronous. It is
> not. Not even close. I would like discuss some possible changes.
> >
> > Option 1. Call it what it is.  Deprecate Cache.close and create a new
> method called asyncClose to replace it. Simple and descriptive.
> > Option 2. Fix cache close so it is synchronous. Some might say that we
> are going to break behavior, but I doubt any user relies on the fact that
> it is asynchronous. That would be dangerous in and of itself. Obviously, we
> don’t want to change behavior, but there have been a number of distributed
> tests that have failed for this. If internal to the code devs don’t get it
> right, where does that leave users.
> > Option 3. Status quo.
> >
> > What do you think? Are there other options I am missing?
> >
> > Thanks,
> > Mark
> >
>
>


[PROPOSAL] Backport usability improvements to support 1.13 branch

2020-09-23 Thread Jason Huynh
Hello,

I’d like to merge the pull request: https://github.com/apache/geode/pull/5524 
into a support 1.13 branch.  The commits are focused on a few usability 
improvements for Geode that were thought to have made it into 1.13 but actually 
did not make it.

What this pull request back ports:

  *   GEODE-8203: Logging to std out along with to the regular log file
  *   GEODE-8283: Rest API for disk store creation
  *   GEODE-8200: Fix for Rebalance API stuck “IN_PROGRESS” state forever and 
GEODE-8200: Enhance GfshRule
  *   GEODE-8241: Locator observers locator-wait-time
  *   GEODE-8078: Log and report error at the correct place


The PR pipeline is failing due to Redis tests (that I don’t think are on 1.13). 
 Everything else appears to be passing.

Thanks,
-Jason



Re: [VOTE] Apache Geode 1.13.1.RC2

2020-11-17 Thread Jason Huynh
+1
Ran gfsh, created cluster, create region

On 11/16/20, 10:29 AM, "Dan Smith"  wrote:

+1

Looks good to me! I ran the geode-release-check against it, looked for 
binary artifacts, checked the pipeline.

-Dan

From: Dick Cavender 
Sent: Thursday, November 12, 2020 5:00 PM
To: dev@geode.apache.org 
Subject: [VOTE] Apache Geode 1.13.1.RC2

Hello Geode Dev Community,

This is a release candidate for Apache Geode version 1.13.1.RC2.
Issues with creation of RC1 forced moving to RC2.
Thanks to all the community members for their contributions to this release!

Please do a review and give your feedback, including the checks you 
performed.

Voting deadline:
3PM PST Tue, November 17 2020.

Please note that we are voting upon the source tag:
rel/v1.13.1.RC2

Release notes:

https://cwiki.apache.org/confluence/display/GEODE/Release+Notes#ReleaseNotes-1.13.1

Source and binary distributions:

https://dist.apache.org/repos/dist/dev/geode/1.13.1.RC2/

Maven staging repo:

https://repository.apache.org/content/repositories/orgapachegeode-1071

GitHub:

https://github.com/apache/geode/tree/rel/v1.13.1.RC2

https://github.com/apache/geode-examples/tree/rel/v1.13.1.RC2

https://github.com/apache/geode-native/tree/rel/v1.13.1.RC2

https://github.com/apache/geode-benchmarks/tree/rel/v1.13.1.RC2

Pipelines:

https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-support-1-13-main

https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-support-1-13-rc

Geode's KEYS file containing PGP keys we use to sign the release

Re: Kafka Summit:- Geode Kafka Connector

2021-01-07 Thread Jason Huynh
Hi Ashish,

Are you asking for someone to present, or are you asking if it is ok for you to 
present?  I don't think there are any restrictions on presenting, hopefully 
someone will correct me if I am wrong.  

I think the call for papers is still open but closing relatively soon?  So if 
you plan on presenting, you'd have to submit a paper and get it accepted.

-Jason

On 1/7/21, 10:00 AM, "aashish choudhary"  wrote:

Hi,

I was thinking we could do a demo of the Geode Kafka connector at the Europe Kafka
Summit.


https://sessionize.com/kafka-summit-eu-2021/

Thoughts?

With best regards,
Ashish



Re: Inputs for efficient querying

2021-02-10 Thread Jason Huynh
Hi Ankit,

I haven't had time to try this out but hopefully the answers get you on the 
correct path...

> 1. How can i form an OQL query (syntax) to fetch the latest row based on
> MAX(versionId).
1.) maybe a nested query or use order by?
"Select x,y,z from /data-region d where d.versionId > 0 order by d.versionId"
"Select x,y,z from /data-region d where d.versionId in (select max(versionId)
from /data-region d)"

> 2. It seems *BETWEEN* support is not available, how can this be achieved.
2.) to_date might be of use or maybe you can call a method to convert the date 
to millis and do a >, < 
"Select x,y,z from /data-region d where d.date > to_date('01/11/2021',
'MM/dd/yyyy') and d.date < to_date('01/12/2021', 'MM/dd/yyyy')"


> 3. What should be the* recommended index creation here*, for this query
> to gain fast performance.
3.) optimal indexes will probably require knowing more about the entire data
set.
Whichever field reduces the data down to the smallest result set will probably
be the one to create the index on
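
To illustrate answer 2 in runnable form, here is a minimal Java sketch of executing such a date-range query through the QueryService. This is a hedged example: the locator address, region name, and field names are illustrative, and to_date comparisons assume the date field is stored as a java.util.Date rather than a String.

import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.query.Query;
import org.apache.geode.cache.query.QueryService;
import org.apache.geode.cache.query.SelectResults;

public class DateRangeQueryExample {
  public static void main(String[] args) throws Exception {
    // Connect to a running cluster (locator host/port are assumptions).
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .create();
    QueryService queryService = cache.getQueryService();

    // Date-range filter expressed with to_date, as suggested above.
    Query query = queryService.newQuery(
        "SELECT d.col_1, d.col_2 FROM /data-region d "
            + "WHERE d.date > to_date('01/11/2021', 'MM/dd/yyyy') "
            + "AND d.date < to_date('01/12/2021', 'MM/dd/yyyy')");
    SelectResults<?> results = (SelectResults<?>) query.execute();
    System.out.println("Matched " + results.size() + " entries");
    cache.close();
  }
}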

Regards,
-Jason

On 2/9/21, 9:55 PM, "ankit Soni"  wrote:

Can someone please guide me on how functionality like the "BETWEEN" operator
can be achieved using Geode OQL (for Date fields)?

Thanks
Ankit

On Tue, Feb 9, 2021, 11:53 PM ankit Soni  wrote:

> Thanks Dan for your input. I am able to try this at my end and it's
> working as expected.
>
> As a next steps I need to support somewhat complex queries, so updated a
> ValueObject, like
>
> public class ValueObject implements PdxSerializable {
> private static final long serialVersionUID = -754645299372860596L;
> private int versionId;   //1 for latest record; 0 for previous latest
> private String date;
> private String col_1;
> private String col_2;
> private String col_3;
> private String type;
> private Map map;
>
> public ValueObject() {
> }
>
> *Region* : *Key:* random-string and *Value:*
> ValueObject
>
> Need to support queries that fetches columns for *latest record (whose
> versionId is Max)* with filters and aggregation like
> "SELECT date, col_1, col_2, col_3<---Must be fetched from a
> row where MAX(versionId)
>  FROM /data-region d
>  WHERE d.type='t1'
>  AND d.date BETWEEN '2021-01-11' AND '2021-01-12'
> AND d.versionId BETWEEN 0 AND 1
> AND d.col_1 IN SET ('11', '22')
> GROUP BY d.col_1"
>
> *Team, Kindly guide on following,*
> 1. How can i form an OQL query (syntax) to fetch the latest row based on
> MAX(versionId).
> 2. It seems *BETWEEN* support is not available, how can this be achieved.
> 3. What should be the* recommended index creation here*, for this query
> to gain fast performance.
> 4. Any recommendation for Key, currently it's a random string.
>
> Any suggestions on above will be really helpful.
>
> Thank you
> Ankit.
>
> On Fri, 29 Jan 2021 at 23:23, Dan Smith  wrote:
>
>> For the best performance, you should store column2 as a java Map instead
>> of a String which contains a JSON document. If column2 was Map<String, String>, you could do a query like this:
>>
>>
>> SELECT * FROM /exampleRegion r WHERE r.column2['k1'] IN SET('v10', 'v15',
>> 'v7')"
>>
>> You can create an index on the map to optimize this sort of query
>>
>> gfsh>create index --name="IndexName" --expression="r.column2[*]"
>> --region="/exampleRegion r"
>>
>> This page might be helpful
>>
>>
>> 
https://geode.apache.org/docs/guide/112/developing/query_index/creating_map_indexes.html
>>
>> In addition, I noticed that your value implements Serializable. You will
>> get better performance out of the query engine if you configure PDX
>> serialization for your object, either by configuring the auto serializer 
or
>> implementing PdxSerializable. That avoids the need to deserialize your
>> entire value on the server to query/index it.
>>
>> -Dan
>>
>>
>> 
>> From: ankit Soni 
>> Sent: Friday, January 29, 2021 9:32 AM
>> To: dev@geode.apache.org 
>> Subject: Inputs for efficient querying
>>
>> Hello Team,
>>
>> I am loading data into Geode (V 1.12) with the following *Key (of type
>> String)* and *value (custom java object - ValueObject)*.
>>
>> public class ValueObject implements Serializable {
>>   private int id;
>>   private String keyColumn;   // <- Region.Key
>>   private String column_2;    // <- Json document
>>   private String column_3;
>>   private String column_4;
>>   // few more string type members
>> }
>>
>> *Keycolum* is a norm

Re: Question about Map indexes

2021-02-11 Thread Jason Huynh
Hi Alberto,

I haven't checked the PR yet, just read through the email.  The first thought 
that comes to mind is when someone does a != query.  The index still has to 
supply the correct answer to the query (all entries with null or undefined 
values possibly)

I'll try to think of other cases where it might matter.  There may be other 
ways to execute the query, but it would probably take a bit of reworking... (I'll 
check your PR to see if this is already addressed.  Sorry if it is!)

-Jason

On 2/11/21, 8:28 AM, "Alberto Gomez"  wrote:

Hi,

We have observed that creating an index on a Map field causes the creation 
of an index entry for every entry created in the region containing the Map, 
regardless of whether the Map field contains the key used in the index.
Nevertheless, we would expect that only entries whose Map field contains the 
key used in the index would have a corresponding index entry. With the current 
behavior, the memory consumed by the index could be much higher than needed, 
depending on the percentage of entries whose Map field contains the key used in the 
index.

---
Example:
We have a region with entries whose key type is a String and the value type 
is an object with a field called "field1" of Map type.

We expect to run queries on the region like the following:

SELECT * from /example-region1 p WHERE p.field1['mapkey1']=$1"

We create a Map index to speed up the above queries:

gfsh> create index --name=myIndex --expression="r.field1['mapkey1']" 
--region="/example-region1 r"

We do the following puts:
- Put entry with key="key1" and with value=
- Put entry with key="key2" and with value=

The observation is that Geode creates an index entry for each of the two region 
entries. For the first entry, the internal indexKey is "key1" and for the second one, the 
internal indexKey is null.
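
For reference, a minimal Java sketch of the two puts described above (the value class, field contents, and map keys are hypothetical, since the original values were not preserved in the archive):

import java.util.HashMap;
import java.util.Map;
import org.apache.geode.cache.Region;

public class MapIndexScenario {
  // Hypothetical value type exposing the Map field referenced by the index expression.
  public static class Value {
    public Map<String, String> field1 = new HashMap<>();
  }

  public static void populate(Region<String, Value> region) {
    Value v1 = new Value();
    v1.field1.put("mapkey1", "someValue");    // contains the indexed key
    region.put("key1", v1);

    Value v2 = new Value();
    v2.field1.put("otherKey", "otherValue");  // does NOT contain "mapkey1"
    region.put("key2", v2);
  }
}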

These are the stats shown by gfsh after doing the above puts:

gfsh>list indexes --with-stats=yes
Member Name | Member ID                           | Region Path      | Name     | Type  | Indexed Expression  | From Clause        | Valid Index | Uses | Updates | Update Time | Keys | Values
----------- | ----------------------------------- | ---------------- | -------- | ----- | ------------------- | ------------------ | ----------- | ---- | ------- | ----------- | ---- | ------
server1     | 192.168.0.26(server1:1109606):41000 | /example-region1 | mapIndex | RANGE | r.field1['mapkey1'] | /example-region1 r | true        | 1    | 1       | 0           | 1    | 1
server2     | 192.168.0.26(server2:1109695):41001 | /example-region1 | mapIndex | RANGE | r.field1['mapkey1'] | /example-region1 r | true        | 1    | 1       | 0           | 1    | 1
---

Is there any reason why Geode would create an index entry for the second 
entry given that the Map field does not contain the key in the Map index?

I have created a draft pull request changing the behavior of Geode to not 
create the index entry when the Map field does not contain the key used in the 
index. Only two Unit test cases had to be adjusted. Please see: 
https://github.com/apache/geode/pull/6028

With this change and the same scenario as the one in the example, only one 
index entry is created. The stats shown by gfsh after the change are the 
following:

gfsh>list indexes --with-stats=yes
Member Name | Member ID                           | Region Path      | Name     | Type  | Indexed Expression  | From Clause        | Valid Index | Uses | Updates | Update Time | Keys | Values
----------- | ----------------------------------- | ---------------- | -------- | ----- | ------------------- | ------------------ | ----------- | ---- | ------- | ----------- | ---- | ------
server1     | 192.168.0.26(server1:1102192):41000 | /example-region1 | mapIndex | RANGE | r.field1['mapkey1'] | /example-region1 r | true        | 2    | 1       | 0           | 0    | 0
server2     | 192.168.0.26(server2:1102279):41001 | /example-region1 | mapIndex | RANGE | r.field1['mapkey1'] | /example-region1 r | true        | 2    | 1       | 0           | 1    | 1


Could someone tell me whether the current behavior is incorrect, or whether I am 
missing something and the change I am proposing would break something else?

Thanks in advance,

/Alberto G.



Re: Last call on 1.2.0 (and fixing test failures)

2017-05-15 Thread Jason Huynh
GEODE-2900 would be nice to get in for Lucene integration.  I hope to get
it checked in today

On Mon, May 15, 2017 at 10:24 AM Swapnil Bawaskar 
wrote:

> I think we should also wait for GEODE-2836
>
> On Mon, May 15, 2017 at 8:58 AM Karen Miller  wrote:
>
> > Let's finish GEODE-2913, the documentation for improvements made to the
> > Lucene integration and include it with the 1.2.0 release!
> >
> >
> > On Sun, May 14, 2017 at 6:47 PM, Anthony Baker 
> wrote:
> >
> > > Hi everyone,
> > >
> > > Our last release was v1.1.1 in March.  We have made a lot of great
> > > progress on the develop branch with over 250 issues fixed.  It would be
> > > great to get those changes into a release.  What’s left before we are
> > ready
> > > to release 1.2.0?
> > >
> > > Note that we need a clean test run before releasing (except for “flaky"
> > > tests).  We haven’t had one of those in awhile [1].
> > >
> > > Anthony
> > >
> > > [1] https://builds.apache.org/job/Geode-nightly/
> > > lastCompletedBuild/testReport/
> > >
> > >
> >
>


Re: Last call on 1.2.0 (and fixing test failures)

2017-05-15 Thread Jason Huynh
GEODE-2900 has been checked into develop

On Mon, May 15, 2017 at 3:47 PM Bruce Schuchardt 
wrote:

> Yes, GEODE-2915 needs to be fixed for 1.2.0
>
> Le 5/15/2017 à 11:59 AM, Lynn Hughes-Godfrey a écrit :
> > GEODE-2915: Messages rejected due to unknown "vmkind"
> >
> >
> >
> > On Mon, May 15, 2017 at 11:26 AM, Jason Huynh  wrote:
> >
> >> GEODE-2900 would be nice to get in for Lucene integration.  I hope to
> get
> >> it checked in today
> >>
> >> On Mon, May 15, 2017 at 10:24 AM Swapnil Bawaskar  >
> >> wrote:
> >>
> >>> I think we should also wait for GEODE-2836
> >>>
> >>> On Mon, May 15, 2017 at 8:58 AM Karen Miller 
> wrote:
> >>>
> >>>> Let's finish GEODE-2913, the documentation for improvements made to
> the
> >>>> Lucene integration and include it with the 1.2.0 release!
> >>>>
> >>>>
> >>>> On Sun, May 14, 2017 at 6:47 PM, Anthony Baker 
> >>> wrote:
> >>>>> Hi everyone,
> >>>>>
> >>>>> Our last release was v1.1.1 in March.  We have made a lot of great
> >>>>> progress on the develop branch with over 250 issues fixed.  It would
> >> be
> >>>>> great to get those changes into a release.  What’s left before we are
> >>>> ready
> >>>>> to release 1.2.0?
> >>>>>
> >>>>> Note that we need a clean test run before releasing (except for
> >> “flaky"
> >>>>> tests).  We haven’t had one of those in awhile [1].
> >>>>>
> >>>>> Anthony
> >>>>>
> >>>>> [1] https://builds.apache.org/job/Geode-nightly/
> >>>>> lastCompletedBuild/testReport/
> >>>>>
> >>>>>
>
>


Re: Broken: apache/geode#2632 (develop - 7da9047)

2017-05-22 Thread Jason Huynh
I've reverted all the changes related to this failed pr merge (my fault).
I've asked Deepak to resubmit the PR

On Mon, May 22, 2017 at 4:38 PM Travis CI  wrote:

> Build Update for apache/geode
> -
>
> Build: #2632
> Status: Broken
>
> Duration: 9 minutes and 51 seconds
> Commit: 7da9047 (develop)
> Author: Jason Huynh
> Message: GEODE-269: Removing deprecated API's from FunctionService.
>
> This closes #511
>
> View the changeset:
> https://github.com/apache/geode/compare/4e3a4d762d7c...7da9047a21f6
>
> View the full build log and details:
> https://travis-ci.org/apache/geode/builds/234991181?utm_source=email&utm_medium=notification
>
> --
>
> You can configure recipients for build notifications in your .travis.yml
> file. See https://docs.travis-ci.com/user/notifications
>
>


Re: Build failed in Jenkins: Geode-nightly #864

2017-06-12 Thread Jason Huynh
I've created a ticket for the failing wan test: GEODE-3066


On Mon, Jun 12, 2017 at 8:34 AM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See 
>
> --
> [...truncated 96.86 KB...]
> :geode-cq:assemble
> :geode-cq:compileTestJavaNote: Some input files use or override a
> deprecated API.
> Note: Recompile with -Xlint:deprecation for details.
> Note: Some input files use unchecked or unsafe operations.
> Note: Recompile with -Xlint:unchecked for details.
>
> :geode-cq:processTestResources
> :geode-cq:testClasses
> :geode-cq:checkMissedTests
> :geode-cq:spotlessJavaCheck
> :geode-cq:spotlessCheck
> :geode-cq:test
> :geode-cq:check
> :geode-cq:build
> :geode-cq:distributedTest
> :geode-cq:integrationTest
> :geode-json:assemble
> :geode-json:compileTestJava UP-TO-DATE
> :geode-json:processTestResources
> :geode-json:testClasses
> :geode-json:checkMissedTests UP-TO-DATE
> :geode-json:spotlessJavaCheck
> :geode-json:spotlessCheck
> :geode-json:test UP-TO-DATE
> :geode-json:check
> :geode-json:build
> :geode-json:distributedTest UP-TO-DATE
> :geode-json:integrationTest UP-TO-DATE
> :geode-junit:javadoc
> :geode-junit:javadocJar
> :geode-junit:sourcesJar
> :geode-junit:signArchives SKIPPED
> :geode-junit:assemble
> :geode-junit:compileTestJava
> :geode-junit:processTestResources UP-TO-DATE
> :geode-junit:testClasses
> :geode-junit:checkMissedTests
> :geode-junit:spotlessJavaCheck
> :geode-junit:spotlessCheck
> :geode-junit:test
> :geode-junit:check
> :geode-junit:build
> :geode-junit:distributedTest
> :geode-junit:integrationTest
> :geode-lucene:assemble
> :geode-lucene:compileTestJavaNote: Some input files use or override a
> deprecated API.
> Note: Recompile with -Xlint:deprecation for details.
> Note: Some input files use unchecked or unsafe operations.
> Note: Recompile with -Xlint:unchecked for details.
>
> :geode-lucene:processTestResources
> :geode-lucene:testClasses
> :geode-lucene:checkMissedTests
> :geode-lucene:spotlessJavaCheck
> :geode-lucene:spotlessCheck
> :geode-lucene:test
> :geode-lucene:check
> :geode-lucene:build
> :geode-lucene:distributedTest
> :geode-lucene:integrationTest
> :geode-old-client-support:assemble
> :geode-old-client-support:compileTestJavaNote: <
> https://builds.apache.org/job/Geode-nightly/ws/geode-old-client-support/src/test/java/com/gemstone/gemfire/cache/execute/FunctionExceptionTest.java>
> uses or overrides a deprecated API.
> Note: Recompile with -Xlint:deprecation for details.
>
> :geode-old-client-support:processTestResources UP-TO-DATE
> :geode-old-client-support:testClasses
> :geode-old-client-support:checkMissedTests
> :geode-old-client-support:spotlessJavaCheck
> :geode-old-client-support:spotlessCheck
> :geode-old-client-support:test
> :geode-old-client-support:check
> :geode-old-client-support:build
> :geode-old-client-support:distributedTest
> :geode-old-client-support:integrationTest
> :geode-old-versions:javadoc UP-TO-DATE
> :geode-old-versions:javadocJar
> :geode-old-versions:sourcesJar
> :geode-old-versions:signArchives SKIPPED
> :geode-old-versions:assemble
> :geode-old-versions:compileTestJava UP-TO-DATE
> :geode-old-versions:processTestResources UP-TO-DATE
> :geode-old-versions:testClasses UP-TO-DATE
> :geode-old-versions:checkMissedTests UP-TO-DATE
> :geode-old-versions:spotlessJavaCheck
> :geode-old-versions:spotlessCheck
> :geode-old-versions:test UP-TO-DATE
> :geode-old-versions:check
> :geode-old-versions:build
> :geode-old-versions:distributedTest UP-TO-DATE
> :geode-old-versions:integrationTest UP-TO-DATE
> :geode-pulse:assemble
> :geode-pulse:compileTestJavaNote: <
> https://builds.apache.org/job/Geode-nightly/ws/geode-pulse/src/test/java/org/apache/geode/tools/pulse/tests/ui/PulseBase.java>
> uses or overrides a deprecated API.
> Note: Recompile with -Xlint:deprecation for details.
> Note: <
> https://builds.apache.org/job/Geode-nightly/ws/geode-pulse/src/test/java/org/apache/geode/tools/pulse/controllers/PulseControllerJUnitTest.java>
> uses unchecked or unsafe operations.
> Note: Recompile with -Xlint:unchecked for details.
>
> :geode-pulse:processTestResources
> :geode-pulse:testClasses
> :geode-pulse:checkMissedTests
> :geode-pulse:spotlessJavaCheck
> :geode-pulse:spotlessCheck
> :geode-pulse:test
> :geode-pulse:check
> :geode-pulse:build
> :geode-pulse:distributedTest
> :geode-pulse:integrationTest
> :geode-rebalancer:assemble
> :geode-rebalancer:compileTestJava
> :geode-rebalancer:processTestResources UP-TO-DATE
> :geode-rebalancer:testClasses
> :geode-rebalancer:checkMissedTests
> :geode-rebalancer:spotlessJavaCheck
> :geode-rebalancer:spotlessCheck
> :geode-rebalancer:test
> :geode-rebalancer:check
> :geode-rebalancer:build
> :geode-rebalancer:distributedTest
> :geode-rebalancer:integrationTest
> :geode-wan:assemble
> :geode-wan:compileTestJavaNote: Some input files use or

Re: continuous query internal mechanism questions

2017-08-15 Thread Jason Huynh
I am not quite sure how the native client registers CQs. From my understanding,
with the Java API there is only one message (an ExecuteCQ message) that is
executed on the server side and then replicated to the other nodes through the
profile (OperationMessage).

It seems the extra ExecuteCQ message failing and then closing the CQ might
be putting the system in a weird state...
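
For comparison, a rough sketch of how a Java client typically registers and executes a CQ (the locator address, region name, query string, and listener body are illustrative, not taken from this thread):

import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.query.CqAttributes;
import org.apache.geode.cache.query.CqAttributesFactory;
import org.apache.geode.cache.query.CqEvent;
import org.apache.geode.cache.query.CqListener;
import org.apache.geode.cache.query.CqQuery;
import org.apache.geode.cache.query.QueryService;

public class CqExample {
  public static void main(String[] args) throws Exception {
    // Subscription must be enabled on the pool for CQ events to be delivered.
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .setPoolSubscriptionEnabled(true)
        .create();

    QueryService queryService = cache.getQueryService();
    CqAttributesFactory caf = new CqAttributesFactory();
    caf.addCqListener(new CqListener() {
      @Override
      public void onEvent(CqEvent event) {
        System.out.println("CQ event: " + event.getNewValue());
      }

      @Override
      public void onError(CqEvent event) {
        System.out.println("CQ error: " + event.getThrowable());
      }

      @Override
      public void close() {
        // nothing to clean up in this sketch
      }
    });
    CqAttributes cqAttributes = caf.create();

    // Register and start the CQ; executeWithInitialResults() is the variant
    // discussed for the native client above.
    CqQuery cq = queryService.newCq("exampleCq",
        "SELECT * FROM /example-region e WHERE e.active = true", cqAttributes);
    cq.execute();
  }
}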

On Tue, Aug 15, 2017 at 7:56 AM Roi Apelker  wrote:

> Hi,
>
> I have been examining the continuous query registration mechanism for
> quite some time
> This is related to an issue that I have, where sometimes a node crashes (1
> node out of 2), and the other one does not send CQ events. The CQ is
> registered on a partitioned region which resides on these 2 nodes.
>
> I noticed the following behavior, and I wonder if anyone can comment
> regarding it, if it is justified or not and what is the reason:
>
> 1. When the software using the client (native client) registers for the
> CQ, a CQ command (ExecuteCQ61) is received on both servers.
>  -- is this normal behaviour? Does the client actually send this command
> to both servers?
>
> 2. When this command is received by a server, and the CQ is registered,
> another registration message is sent to the other node via an
> OperationMessage (REGISTER_CQ)
>  -- it seems that regularly, the server can handle this situation as the
> second registration identifies the previous one and does not affect it. but
> the question, why do we need this 2nd registration, if there is a command
> sent to each server?
>
> 3. For some reason, sometimes there is a failure to complete the first
> registration (executed by ExecuteCQ61) and then this failure causes a
> closure to the CQ, which is accompanied with a close request to the other
> node.
>  -- I assume by now, since 2 registrations and one closure have occurred
> on node 2, the CQ is still active and the client receives notifications.
>
> 4. Sometimes, 1 out of 5, once node 1 crashes, I get a cleanup operation,
> caused by the crash (via MemberCrashedEvent), and this also closes the
> existing CQ, and in this case the CQ in node 2 does not operate anymore and
> the client receives no notifications.
>  -- fact is, that 4 out of 4 times, I do not get this cleanup by
> MemberCrashedEvent (maybe due to some other error), and that the CQ
> notifications are received normally.
>
> Can anyone clear things up for me? Any comment on any of the statements
> above will be greatly appreciated.
>
> Thanks,
>
> Roi
>
>
> -Original Message-
> From: Roi Apelker
> Sent: Wednesday, August 09, 2017 3:21 PM
> To: dev@geode.apache.org
> Subject: RE: continuous query internal mechanism
>
> Dhanyavad
>
> -Original Message-
> From: Anilkumar Gingade [mailto:aging...@pivotal.io]
> Sent: Tuesday, August 08, 2017 9:55 PM
> To: dev@geode.apache.org
> Subject: Re: continuous query internal mechanism
>
> Registered events, i meant, are events generated for interest registration
> "region.registerInterest(*)". And CqEvents are for CQs registered.
>
> -Anil.
>
>
> On Tue, Aug 8, 2017 at 12:27 AM, Roi Apelker 
> wrote:
>
> > Shukriya
> >
> > What is the difference between registered events and CQ events?
> >
> > -Original Message-
> > From: Anilkumar Gingade [mailto:aging...@pivotal.io]
> > Sent: Monday, August 07, 2017 10:12 PM
> > To: dev@geode.apache.org
> > Subject: Re: continuous query internal mechanism
> >
> > CQ Processing on server side is same for all clients (Java, C++)...
> >
> > The subscription events are sent to client as ClientUpdateMessage,
> > which holds information about registered events and CQ events. The
> > client process this and updates/invokes the client side
> > cache/listeners with respective event. Look into
> > ClientUpdateMessageImpl and CacheClientUpdater (for client side
> processing).
> >
> > -Anil.
> >
> >
> >
> >
> > On Mon, Aug 7, 2017 at 11:01 AM, Roi Apelker 
> > wrote:
> >
> > > Thanks,
> > >
> > > By the way, is there any difference in the behaviour of the server,
> > > if the client that registered the CQ is a native (C++) client?
> > >
> > > I have been going over the classes and code for some time and can't
> > > seem to find the actual location where a CQ update/notification is
> > sent...
> > >
> > > It's like CqEventImpl class is never even generated in this scenario.
> > >
> > > If anyone can help here I would be most grateful :-)
> > >
> > > Thanks
> > >
> > > Roi
> > >
> > >
> > >
> > > -Original Message-
> > > From: Anilkumar Gingade [mailto:aging...@pivotal.io]
> > > Sent: Monday, August 07, 2017 8:23 PM
> > > To: dev@geode.apache.org
> > > Subject: Re: continuous query internal mechanism
> > >
> > > You can find those in CqServiceImpl.process*()...
> > >
> > > -Anil.
> > >
> > >
> > > On Mon, Aug 7, 2017 at 9:14 AM, Roi Apelker 
> > > wrote:
> > >
> > > > Hello,
> > > >
> > > > I am trying to look into the code of the continuous query
> > > > mechanism
> > > > - where the GEODE server sends t

Re: Indxes and hints

2017-08-28 Thread Jason Huynh
Hi Roi,

Answers are below the questions...

Question 1. Is it true to say, that the query as it is will load all the
data values from the file, since the field C is part of the value, which is
already persisted to file?

It depends on whether an index is used. If an index is used, the values
that are part of the results will need to be loaded to actually return a
result.  If an index is not used, then all the values would need to be
loaded to actually have something to evaluate the filter criteria on.

Question 2. If I add a hint on A and B, will it mean that there will be a
"2 phase search", first the select on A and B, and then, only on the
results, on the field C? (this way, not all records will be loaded from
file, only those that suit the A and B condition)

Depending on the query, it could use one index or more.  If it's a query with
only AND clauses, it should just choose one and then evaluate the other
filters on the subset that is returned from the index.

Question 3. Is it possible to define an index on a value field? (i.e. not
from the key) - will it work exactly like defining one form the key or are
three any limitations? (again, I am looking to overcome the situation,
where as it seems, the records are loaded unnecessarily from disk)

Yes, indexes can be defined on fields in the value.  It will work the same.


If you are sure you are already using an index in the query and still
loading every value for every execution of that query, there may be
something weird going on...
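
As an illustrative sketch of answers 2 and 3 (the index name, region name, and field names below are made up, not from your setup), creating an index on a value field and steering the query toward it with a hint could look roughly like this:

import org.apache.geode.cache.query.QueryService;
import org.apache.geode.cache.query.SelectResults;

public class ValueFieldIndexExample {
  public static void queryWithHint(QueryService queryService) throws Exception {
    // Index on a field of the value (answer 3); name and expression are illustrative.
    queryService.createIndex("cIndex", "v.c", "/example-region v");

    // The HINT clause asks the engine to prefer the named index (answer 2);
    // the remaining predicates are evaluated on the subset it returns.
    SelectResults<?> results = (SelectResults<?>) queryService.newQuery(
        "<HINT 'cIndex'> SELECT DISTINCT * FROM /example-region v "
            + "WHERE v.a = 1 AND v.b > 10 AND v.c = true").execute();
    System.out.println("Matched " + results.size() + " entries");
  }
}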

On Sun, Aug 27, 2017 at 2:55 AM Roi Apelker  wrote:

> Hi,
>
> I have a few questions regarding indexes and hints, if someone could
> confirm the below it would be great:
>
> - I have a situation where I use 3 field values in a select (something
> like select where A>1, B>1, C=true)
> - A and B are fields on the key, and C is a field on the value.
> - A and B are indexes
> - I am looking for the most efficient way to execute the query above, in
> the situation where there is overflow to eviction files, meaning some of
> the data has already been evicted to a file, which slows down the select
> considerably (this is not persistence, but overflow).
>
>
> 1. Is it true to say, that the query as it is will load all the data
> values from the file, since the field C is part of the value, which is
> already persisted to file?
> 2. If I add a hint on A and B, will it mean that there will be a "2 phase
> search", first the select on A and B, and then, only on the results, on the
> field C? (this way, not all records will be loaded from file, only those
> that suit the A and B condition)
> 3. Is it possible to define an index on a value field? (i.e. not from the
> key) - will it work exactly like defining one form the key or are three any
> limitations? (again, I am looking to overcome the situation, where as it
> seems, the records are loaded unnecessarily from disk)
>
> Thank you,
>
> Roi
>
>
> -Original Message-
> From: Roi Apelker
> Sent: Thursday, August 24, 2017 7:03 PM
> To: dev@geode.apache.org
> Subject: eviction files
>
> Hi,
>
> I am looking into the internals of the eviction process,
>
> Can anyone point me to the most important classes, the main mechanism
> "wheels" etc.?
>
> Thanks,
>
> Roi
>
> -Original Message-
> From: Roi Apelker
> Sent: Wednesday, August 16, 2017 8:38 PM
> To: dev@geode.apache.org
> Subject: RE: continuous query internal mechanism questions
>
> It seems like the code in the native client (in the version I have, which
> may be old) send the message to all servers:
>
> CqResultsPtr CqQueryImpl::executeWithInitialResults(uint32_t timeout) {
>   ...
>
>   TcrMessage msg(TcrMessage::EXECUTECQ_WITH_IR_MSG_TYPE, m_cqName,
> m_queryString, CqState::RUNNING, isDurable(), m_tccdm);
>   TcrMessage reply(true, m_tccdm);
>   ChunkedQueryResponse* resultCollector = (new
> ChunkedQueryResponse(reply));
>   reply.setChunkedResultHandler(static_cast *>(resultCollector));
>   reply.setTimeout(timeout);
>
>   GfErrType err = GF_NOERR;
>   err = m_tccdm->sendSyncRequest(msg, reply); ..
>
> And sendSyncRequest:
> ...
>
> for (std::vector::iterator ep = m_endpoints.begin(); ep !=
> m_endpoints.end(); ++ep) {
> if ((*ep)->connected()) {
>   (*ep)->setDM(this);
>   opErr = sendRequestToEP(request, reply, *ep);//this will go to
> ThinClientDistributionManager
>
> ...
>
>
> Can this be causing the issue?
>
>
>
> -Roi
>
>
>
>
>
> -Original Message-
> From: Jason Huynh [mailto:jasonhu...@apache.org]
> Sent: Tuesday, August 15, 2017 9:25 PM
> To: dev@geode.apache.org
> Subject: Re: continuous query internal mechanism questions
&

Re: Query mechanism

2017-08-28 Thread Jason Huynh
DefaultQuery is where the processing starts for a query.

CompiledSelect will most likely be the first node in processing the query.

The IndexManager class will contain the list of indexes for a region as
well as the methods that help find indexes to use with a query.

Specific index classes:
CompactRangeIndex
RangeIndex
HashIndex
PrimaryKeyIndex
MapRangeIndex
CompactMapRangeIndex



On Mon, Aug 28, 2017 at 6:04 AM Roi Apelker  wrote:

>
> Hi,
>
> I am looking into the internals of how the query process works, and how
> indexes/hints affect it,
>
> Can anyone point me to the most important classes, the main mechanism
> "wheels" etc.?
>
> Thanks,
>
> Roi
>
>
> -Original Message-
> From: Roi Apelker
> Sent: Sunday, August 27, 2017 12:55 PM
> To: dev@geode.apache.org
> Subject: Indxes and hints
>
> Hi,
>
> I have a few questions regarding indexes and hints, if someone could
> confirm the below it would be great:
>
> - I have a situation where I use 3 field values in a select (something
> like select where A>1, B>1, C=true)
> - A and B are fields on the key, and C is a field on the value.
> - A and B are indexes
> - I am looking for the most efficient way to execute the query above, in
> the situation where there is overflow to eviction files, meaning some of
> the data has already been evicted to a file, which slows down the select
> considerably (this is not persistence, but overflow).
>
>
> 1. Is it true to say, that the query as it is will load all the data
> values from the file, since the field C is part of the value, which is
> already persisted to file?
> 2. If I add a hint on A and B, will it mean that there will be a "2 phase
> search", first the select on A and B, and then, only on the results, on the
> field C? (this way, not all records will be loaded from file, only those
> that suit the A and B condition) 3. Is it possible to define an index on a
> value field? (i.e. not from the key) - will it work exactly like defining
> one form the key or are three any limitations? (again, I am looking to
> overcome the situation, where as it seems, the records are loaded
> unnecessarily from disk)
>
> Thank you,
>
> Roi
>
>
> -Original Message-
> From: Roi Apelker
> Sent: Thursday, August 24, 2017 7:03 PM
> To: dev@geode.apache.org
> Subject: eviction files
>
> Hi,
>
> I am looking into the internals of the eviction process,
>
> Can anyone point me to the most important classes, the main mechanism
> "wheels" etc.?
>
> Thanks,
>
> Roi
>
> -Original Message-
> From: Roi Apelker
> Sent: Wednesday, August 16, 2017 8:38 PM
> To: dev@geode.apache.org
> Subject: RE: continuous query internal mechanism questions
>
> It seems like the code in the native client (in the version I have, which
> may be old) send the message to all servers:
>
> CqResultsPtr CqQueryImpl::executeWithInitialResults(uint32_t timeout) {
>   ...
>
>   TcrMessage msg(TcrMessage::EXECUTECQ_WITH_IR_MSG_TYPE, m_cqName,
> m_queryString, CqState::RUNNING, isDurable(), m_tccdm);
>   TcrMessage reply(true, m_tccdm);
>   ChunkedQueryResponse* resultCollector = (new
> ChunkedQueryResponse(reply));
>   reply.setChunkedResultHandler(static_cast *>(resultCollector));
>   reply.setTimeout(timeout);
>
>   GfErrType err = GF_NOERR;
>   err = m_tccdm->sendSyncRequest(msg, reply); ..
>
> And sendSyncRequest:
> ...
>
> for (std::vector::iterator ep = m_endpoints.begin(); ep !=
> m_endpoints.end(); ++ep) {
> if ((*ep)->connected()) {
>   (*ep)->setDM(this);
>   opErr = sendRequestToEP(request, reply, *ep);//this will go to
> ThinClientDistributionManager
>
> ...
>
>
> Can this be causing the issue?
>
>
>
> -Roi
>
>
>
>
>
> -Original Message-
> From: Jason Huynh [mailto:jasonhu...@apache.org]
> Sent: Tuesday, August 15, 2017 9:25 PM
> To: dev@geode.apache.org
> Subject: Re: continuous query internal mechanism questions
>
> I am not quite sure how native client registers cqs. From my understanding:
> with the java api, I believe there is only one message (ExecuteCQ message)
> that is executed on the server side and then replicated to the other nodes
> through the profile (OperationMessage).
>
> It seems the extra ExecuteCQ message failing and then closing the cq might
> be putting the system in a weird state...
>
> On Tue, Aug 15, 2017 at 7:56 AM Roi Apelker 
> wrote:
>
> > Hi,
> >
> > I have been examining the continuous query registration mechanism for
> > quite some time This is related to an issue that I have, where
> > sometimes a node cr

Re: Indxes and hints

2017-08-30 Thread Jason Huynh
You will probably have to step through the debugger for this one... it really
depends on the query.  For this query, I expect the query engine to pick
one index and run the rest of the criteria on the results of the first
index used.  My guess is you have created a CompactRangeIndex, and if so,
you can see in CompactRangeIndex.java around line 811:

if (ok && runtimeItr != null && iterOps != null) {
  ok = QueryUtils.applyCondition(iterOps, context);
}

This is where it would apply the other conditions (B and C, or A and C,
depending on which index was selected).
The query engine was modified to try to only use one index if it can.

The load from disk (again assuming CompactRangeIndex) is probably occurring
in MemoryIndexStore.getTargetObject.

On Wed, Aug 30, 2017 at 9:21 AM Roi Apelker  wrote:

> One more question:
>
> As I am trying to create a situation where the disk is accessed as least
> as possible
> (with a select distinct from X where a=1 and b>10 and c=true;
> In which a and b are indexes and c is not, and c is in the value which is
> evicted to disk)
>
> Did I get it right - that if I use a hint on a, or a hint on b, or a hint
> on both, it will first do a select on the hinted, and ONLY THEN the others?
>
> Can anyone refer me to the code (where the 2 phase search occurs)?
>
> Where is the value finally loaded from disk?
>
> Thank you
>
> Roi
> -Original Message-
> From: Roi Apelker
> Sent: Tuesday, August 29, 2017 4:02 PM
> To: dev@geode.apache.org
> Subject: RE: Indxes and hints
>
> Thank you Jason :-)
>
> -Original Message-
> From: Jason Huynh [mailto:jhu...@pivotal.io]
> Sent: Monday, August 28, 2017 7:24 PM
> To: dev@geode.apache.org
> Subject: Re: Indxes and hints
>
> Hi Roi,
>
> Answers are below the questions...
>
> Question 1. Is it true to say, that the query as it is will load all the
> data values from the file, since the field C is part of the value, which is
> already persisted to file?
>
> Depending on if an index is used or not, if an index is used, the values
> that are part of the results will need to be loaded to actually return a
> result.  If an index is not used, then the all the values would need to be
> loaded to actually have something to evaluate the filter criteria on.
>
> Question 2. If I add a hint on A and B, will it mean that there will be a
> "2 phase search", first the select on A and B, and then, only on the
> results, on the field C? (this way, not all records will be loaded from
> file, only those that suit the A and B condition)
>
> Depending on the  query, it could use one, or more.  If it's a query with
> only AND clauses, it should just choose one and then evaluate the other
> filters on the subset that is returned from the index.
>
> Question 3. Is it possible to define an index on a value field? (i.e. not
> from the key) - will it work exactly like defining one form the key or are
> three any limitations? (again, I am looking to overcome the situation,
> where as it seems, the records are loaded unnecessarily from disk)
>
> Yes, indexes can be defined on fields in the value.  It will work the same.
>
>
> If you are sure you are already using an index in the query and still
> loading every value for every execution of that query, there may be
> something weird going on...
>
> On Sun, Aug 27, 2017 at 2:55 AM Roi Apelker 
> wrote:
> This message and the information contained herein is proprietary and
> confidential and subject to the Amdocs policy statement,
>
> you may review at https://www.amdocs.com/about/email-disclaimer <
> https://www.amdocs.com/about/email-disclaimer>
> This message and the information contained herein is proprietary and
> confidential and subject to the Amdocs policy statement,
>
> you may review at https://www.amdocs.com/about/email-disclaimer <
> https://www.amdocs.com/about/email-disclaimer>
>


Re: Indxes and hints

2017-09-05 Thread Jason Huynh
Not exactly sure of the reasoning to use only one index, other than it
was a performance choice at the time.

Are you sure you are seeing both indexes being used?  In Geode, with the
following query, select * from /region p where p.ID > 0 AND p.status = 'on'
ORDER BY ID, I only see one of the indexes being used.  I haven't seen a
second-stage query in Geode but maybe my query is not correct.

On Sun, Sep 3, 2017 at 9:17 AM Roi Apelker  wrote:

> Thank you,
>
> Can you explain why " The query engine was modified to try to only use one
> index if it can."?
>
> I also noticed that even if I query on A and B, and ORDER BY B - it seemed
> to perform the query on B only, at least in a separate stage. Why is that?
>
>
>
>
> -Roi
>
> -Original Message-
> From: Jason Huynh [mailto:jhu...@pivotal.io]
> Sent: Wednesday, August 30, 2017 10:15 PM
> To: dev@geode.apache.org
> Subject: Re: Indxes and hints
>
> You will probably have to step through debugger for this one.. it really
> depends on the query.  For this query, I expect the query engine to pick
> one index and run the rest of the criteria on the results of the first
> index used.  My guess is you have created a CompactRangeIndex, and if so,
> you can see in CompactRangeIndex.java around line 811:
>
> if (ok && runtimeItr != null && iterOps != null) {
>
>   ok = QueryUtils.applyCondition(iterOps, context);
>
> }
> This is where it would apply the older conditions (B and C or A and C
> depending on which index was selected) The query engine was modified to try
> to only use one index if it can.
>
> The load from disk (again assuming CompactRangeIndex) is probably
> occurring in MemoryIndexStore.getTargetObject.
>
> On Wed, Aug 30, 2017 at 9:21 AM Roi Apelker 
> wrote:
>
> > One more question:
> >
> > As I am trying to create a situation where the disk is accessed as
> > least as possible (with a select distinct from X where a=1 and b>10
> > and c=true; In which a and b are indexes and c is not, and c is in the
> > value which is evicted to disk)
> >
> > Did I get it right - that if I use a hint on a, or a hint on b, or a
> > hint on both, it will first do a select on the hinted, and ONLY THEN the
> others?
> >
> > Can anyone refer me to the code (where the 2 phase search occurs)?
> >
> > Where is the value finally loaded from disk?
> >
> > Thank you
> >
> > Roi
> > -Original Message-
> > From: Roi Apelker
> > Sent: Tuesday, August 29, 2017 4:02 PM
> > To: dev@geode.apache.org
> > Subject: RE: Indxes and hints
> >
> > Thank you Jason :-)
> >
> > -Original Message-
> > From: Jason Huynh [mailto:jhu...@pivotal.io]
> > Sent: Monday, August 28, 2017 7:24 PM
> > To: dev@geode.apache.org
> > Subject: Re: Indxes and hints
> >
> > Hi Roi,
> >
> > Answers are below the questions...
> >
> > Question 1. Is it true to say, that the query as it is will load all
> > the data values from the file, since the field C is part of the value,
> > which is already persisted to file?
> >
> > Depending on if an index is used or not, if an index is used, the
> > values that are part of the results will need to be loaded to actually
> > return a result.  If an index is not used, then the all the values
> > would need to be loaded to actually have something to evaluate the
> filter criteria on.
> >
> > Question 2. If I add a hint on A and B, will it mean that there will
> > be a
> > "2 phase search", first the select on A and B, and then, only on the
> > results, on the field C? (this way, not all records will be loaded
> > from file, only those that suit the A and B condition)
> >
> > Depending on the  query, it could use one, or more.  If it's a query
> > with only AND clauses, it should just choose one and then evaluate the
> > other filters on the subset that is returned from the index.
> >
> > Question 3. Is it possible to define an index on a value field? (i.e.
> > not from the key) - will it work exactly like defining one form the
> > key or are three any limitations? (again, I am looking to overcome the
> > situation, where as it seems, the records are loaded unnecessarily
> > from disk)
> >
> > Yes, indexes can be defined on fields in the value.  It will work the
> same.
> >
> >
> > If you are sure you are already using an index in the query and still
> > loading every value for every execution of that query, there may be
> > something weird going on...
> >
> > O

Re: [DISCUSS] Addition of isValid API to Index interface

2017-09-10 Thread Jason Huynh
1.)  Does anyone know of a way to do a rollback where the put is already
reflected in the region?  If that is the desired behavior, then perhaps we
will have to live with the current behavior (leaving the region and indexes in
a bad state; WAN and other callbacks that occur after index maintenance will
not occur for the one operation, but the put has made it into the region) until
someone can figure out how to roll a put back and revert the update to all
the indexes.

How should this affect putAll, if at all?

Any callbacks that occur before index update have already been called
(cache writers?). I am not sure how those should be affected by a
rollback...

2.)  So the index behavior changes depending on whether it is marked for sync or
async maintenance.  In sync, the index would reject the put, but in async it
would just be marked as invalid.




On Sat, Sep 9, 2017 at 6:48 AM John Blum  wrote:

> +1 to both of Anil's points.
>
> On Fri, Sep 8, 2017 at 3:04 PM, Anilkumar Gingade 
> wrote:
>
> > Indexes are critical for querying; most of the databases doesn't allow
> > insert/update if there is any failure with index maintenance...
> >
> > As Geode OQL supports two ways (sync and async) to maintain the indexes,
> we
> > need be careful about the error handling in both cases...
> >
> > My take is:
> > 1. For synchronous index maintenance:
> > If there is any failure in updating any index (security/auth or logical
> > error) on the region; throw an exception and rollback the cache update/op
> > (index management id done under region.entry lock - we should be able to
> > revert the op). If index or cache is left in bad state, then its a bug
> that
> > needs to be addressed.
> >
> > Most of the time, If there is any logical error in index, it will be
> > detected as soon as index is created (on existing data) or when first
> > update is done to the cache.
> >
> > 2. For Asynchronous index maintenance:
> > As this is async (assuming) user has good understanding of the risk
> > involved with async, any error with index maintenance, the index should
> be
> > invalidated...
> >
> >  About the security/auth, the user permission with region read/write
> needs
> > to be applied for index updates, there should not be different permission
> > on index.
> >
> > -Anil.
> >
> >
> >
> > On Fri, Sep 8, 2017 at 2:01 PM, Nabarun Nag  wrote:
> >
> > > Hi Mike,
> > >
> > > Please do find our answers below:
> > > *Question:* What if there were multiple indices that were in flight and
> > > only the third
> > > one errors out, will they all be marked invalid?
> > >
> > > *Answer:* Only the third will be marked invalid and only the third one
> > will
> > > not be used for query execution.
> > >
> > > *Question/Statement:* If anything goes wrong with the put it should
> > > probably still throw back to
> > > the caller. Silent invalidation of the index is probably not desirable.
> > >
> > > *Answer: *
> > > In our current design this the flow of execution of a put operation:
> > > entry put into region -> update index -> other wan related executions /
> > > callbacks etc.
> > >
> > > If an exception happens while updating the index, the cache gets into a
> > bad
> > > state, and we may end up getting different results depending on the
> index
> > > we are using. As the failure happens half way in a put operation, the
> > > regions / cache are now in a bad state.
> > > --
> > > We are thinking that if index is created  over a method invocation in
> an
> > > empty region and then we do puts, but method invocation is not allowed
> as
> > > per security policies. The puts will now be successful but the index
> will
> > > be rendered invalid. Previously the puts will fail with exception and
> put
> > > the entire cache in a bad state.
> > >
> > >
> > >
> > > Regards
> > > Nabarun
> > >
> > >
> > >
> > >
> > >
> > > On Fri, Sep 8, 2017 at 10:43 AM Michael Stolz 
> wrote:
> > >
> > > > Just to help me understand, the index is corrupted in a way beyond
> just
> > > the
> > > > field that errors out?
> > > > What if there were multiple indices that were in flight and only the
> > > third
> > > > one errors out, will they all be marked invalid?
> > > > If anything goes wrong with the put it should probably still throw
> back
> > > to
> > > > the caller. Silent invalidation of the index is probably not
> desirable.
> > > >
> > > > --
> > > > Mike Stolz
> > > > Principal Engineer, GemFire Product Manager
> > > > Mobile: +1-631-835-4771 <(631)%20835-4771> <(631)%20835-4771>
> > > >
> > > > On Fri, Sep 8, 2017 at 12:34 PM, Dan Smith 
> wrote:
> > > >
> > > > > +1
> > > > >
> > > > > -Dan
> > > > >
> > > > > On Thu, Sep 7, 2017 at 9:14 PM, Nabarun Nag 
> wrote:
> > > > >
> > > > > > *Proposal:*
> > > > > > * Index interface will include an API - isValid() which will
> return
> > > > true
> > > > > if
> > > > > > the index is still valid / uncorrupted, else will return false if
> > it
> > > > > > corrupted / invalid.
> > > > > > * gfsh command "list index" will 

Re: [DISCUSS] Addition of isValid API to Index interface

2017-09-11 Thread Jason Huynh
Hi Mike, I think the concern was less about the security portion and more about
what happens if any exception occurs during an index update: right now, the
region gets updated while the rest of the system (index/WAN/callbacks) may or
may not be updated.  I think Naba just tried to provide an example where this
might occur, but that specific scenario is invalid.

I believe Nabarun has opened a ticket for rolling back the put operation
when an index exception occurs. GEODE-3589.  It can probably be modified to
state any exception instead of index exceptions.

To summarize my understanding:
-Someone will need to implement the rollback for GEODE-3589.  This means
that if any exception occurs during a put, Geode will propagate it back to
the user, and it is expected that the rollback mechanism will clean up any
partial put.

GEODE-3520 should be modified to:
-Add the isValid() api to index interface
-Mark an index as invalid during async index updates but not for
synchronous index updates.  The synchronous index updates will rely on a
rollback mechanism
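
For reference, a rough sketch of what the proposed addition to the public Index interface could look like (purely illustrative; the existing methods are elided and the Javadoc wording is my own, not the final API):

package org.apache.geode.cache.query;

public interface Index {
  // ... existing methods (getName, getType, getStatistics, ...) elided ...

  /**
   * Proposed addition: returns false once the index has been marked
   * corrupted/invalid (for example after a failed asynchronous maintenance
   * update), in which case the query engine would no longer use it.
   */
  boolean isValid();
}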




On Mon, Sep 11, 2017 at 1:23 PM Michael Stolz  wrote:

> I think there was an intention of having CREATION of an index require a
> higher privilege than DATA:WRITE, but it shouldn't affect applying the
> index on either of put or get operations.
>
> If we are requiring something like CLUSTER:MANAGE for put on an indexed
> region, that is an incorrect requirement. Only DATA:WRITE should be
> required to put an entry and have it be indexed if an index is present.
>
> --
> Mike Stolz
> Principal Engineer, GemFire Product Manager
> Mobile: +1-631-835-4771 <(631)%20835-4771>
>
> On Fri, Sep 8, 2017 at 6:04 PM, Anilkumar Gingade 
> wrote:
>
> > Indexes are critical for querying; most of the databases doesn't allow
> > insert/update if there is any failure with index maintenance...
> >
> > As Geode OQL supports two ways (sync and async) to maintain the indexes,
> we
> > need be careful about the error handling in both cases...
> >
> > My take is:
> > 1. For synchronous index maintenance:
> > If there is any failure in updating any index (security/auth or logical
> > error) on the region; throw an exception and rollback the cache update/op
> > (index management id done under region.entry lock - we should be able to
> > revert the op). If index or cache is left in bad state, then its a bug
> that
> > needs to be addressed.
> >
> > Most of the time, If there is any logical error in index, it will be
> > detected as soon as index is created (on existing data) or when first
> > update is done to the cache.
> >
> > 2. For Asynchronous index maintenance:
> > As this is async (assuming) user has good understanding of the risk
> > involved with async, any error with index maintenance, the index should
> be
> > invalidated...
> >
> >  About the security/auth, the user permission with region read/write
> needs
> > to be applied for index updates, there should not be different permission
> > on index.
> >
> > -Anil.
> >
> >
> >
> > On Fri, Sep 8, 2017 at 2:01 PM, Nabarun Nag  wrote:
> >
> > > Hi Mike,
> > >
> > > Please do find our answers below:
> > > *Question:* What if there were multiple indices that were in flight and
> > > only the third
> > > one errors out, will they all be marked invalid?
> > >
> > > *Answer:* Only the third will be marked invalid and only the third one
> > will
> > > not be used for query execution.
> > >
> > > *Question/Statement:* If anything goes wrong with the put it should
> > > probably still throw back to
> > > the caller. Silent invalidation of the index is probably not desirable.
> > >
> > > *Answer: *
> > > In our current design this the flow of execution of a put operation:
> > > entry put into region -> update index -> other wan related executions /
> > > callbacks etc.
> > >
> > > If an exception happens while updating the index, the cache gets into a
> > bad
> > > state, and we may end up getting different results depending on the
> index
> > > we are using. As the failure happens half way in a put operation, the
> > > regions / cache are now in a bad state.
> > > --
> > > We are thinking that if index is created  over a method invocation in
> an
> > > empty region and then we do puts, but method invocation is not allowed
> as
> > > per security policies. The puts will now be successful but the index
> will
> > > be rendered invalid. Previously the puts will fail with exception and
> put
> > > the entire cache in a bad state.
> > >
> > >
> > >
> > > Regards
> > > Nabarun
> > >
> > >
> > >
> > >
> > >
> > > On Fri, Sep 8, 2017 at 10:43 AM Michael Stolz 
> wrote:
> > >
> > > > Just to help me understand, the index is corrupted in a way beyond
> just
> > > the
> > > > field that errors out?
> > > > What if there were multiple indices that were in flight and only the
> > > third
> > > > one errors out, will they all be marked invalid?
> > > > If anything goes wrong with the put it should probably still throw
> back
> > > to
> > > > the caller. 

Re: [DISCUSS] Addition of isValid API to Index interface

2017-09-11 Thread Jason Huynh
Anil, we actually do have a case where the index is out of sync with the
region currently.  It's just not likely to happen, but if there is an
exception from an index, the end result is that certain indexes get updated
while the region has already been updated.
However, the exception is thrown back to the putter, so it becomes very
obvious something is wrong.  I believe Naba has updated the ticket to
show a test that reproduces the problem...


On Mon, Sep 11, 2017 at 2:50 PM Anilkumar Gingade 
wrote:

> The other way to look at it is; what happens to a cache op; when there is
> an exception after Region.Entry is created? can it happen? In that case, do
> we stick the entry into the Cache or not? If an exception is handled, how
> is it done, can we look at using the same for Index...
>
> Also previously, once the valid index is created (verified during create or
> first put into the cache); we never had any issue where index is out of
> sync with cache...If that changes with new futures (security?) then we may
> have to change the expectation with indexing...
>
> -Anil.
>
>
>
> On Mon, Sep 11, 2017 at 2:16 PM, Anthony Baker  wrote:
>
> > I’m confused.  Once a cache update has been distributed to other members
> > it can’t be undone.  That update could have triggered myriad other
> > application behaviors.
> >
> > Anthony
> >
> > > On Sep 11, 2017, at 2:04 PM, Michael Stolz  wrote:
> > >
> > > Great, that's exactly the behavior I would expect.
> > >
> > > Thanks.
> > >
> > > --
> > > Mike Stolz
> > > Principal Engineer, GemFire Product Manager
> > > Mobile: +1-631-835-4771 <(631)%20835-4771>
> > >
> > > On Mon, Sep 11, 2017 at 4:34 PM, Jason Huynh 
> wrote:
> > >
> > >> Hi Mike, I think the concern was less about the security portion but
> > rather
> > >> if any exception occurs during index update, right now, the region
> gets
> > >> updated and the rest of the system (index/wan/callbacks) may or may
> not
> > be
> > >> updated.  I think Naba just tried to provide an example where this
> might
> > >> occur, but that specific scenario is invalid.
> > >>
> > >> I believe Nabarun has opened a ticket for rolling back the put
> operation
> > >> when an index exception occurs. GEODE-3589.  It can probably be
> > modified to
> > >> state any exception instead of index exceptions.
> > >>
> > >> To summarize my understanding:
> > >> -Someone will need to implement the rollback for GEODE-3589.  This
> means
> > >> that if any exception occurs during a put, geode it will propagate
> back
> > to
> > >> the user and it is expected the rollback mechanism will clean up any
> > >> partial put.
> > >>
> > >> GEODE-3520 should be modified to:
> > >> -Add the isValid() api to index interface
> > >> -Mark an index as invalid during async index updates but not for
> > >> synchronous index updates.  The synchronous index updates will rely
> on a
> > >> rollback mechanism
> > >>
> > >>
> > >>
> > >>
> > >> On Mon, Sep 11, 2017 at 1:23 PM Michael Stolz 
> > wrote:
> > >>
> > >>> I think there was an intention of having CREATION of an index
> require a
> > >>> higher privilege than DATA:WRITE, but it shouldn't affect applying
> the
> > >>> index on either of put or get operations.
> > >>>
> > >>> If we are requiring something like CLUSTER:MANAGE for put on an
> indexed
> > >>> region, that is an incorrect requirement. Only DATA:WRITE should be
> > >>> required to put an entry and have it be indexed if an index is
> present.
> > >>>
> > >>> --
> > >>> Mike Stolz
> > >>> Principal Engineer, GemFire Product Manager
> > >>> Mobile: +1-631-835-4771 <(631)%20835-4771> <(631)%20835-4771>
> > >>>
> > >>> On Fri, Sep 8, 2017 at 6:04 PM, Anilkumar Gingade <
> aging...@pivotal.io
> > >
> > >>> wrote:
> > >>>
> > >>>> Indexes are critical for querying; most of the databases doesn't
> allow
> > >>>> insert/update if there is any failure with index maintenance...
> > >>>>
> > >>>> As Geode OQL supports two ways (sync and async) to maintain the
> > >> indexes,
> > >>> we
> &g

Re: [DISCUSS] Clean build takes 10minutes to complete now

2017-09-11 Thread Jason Huynh
I was speaking with Jens and he is working on getting the full product
install up to the apache maven repo.  We could then use this instead of
downloading the zip files manually.  This would allow gradle to cache the
full product install in the local repo (similar to the other jars that are
being pulled down)

I think this is referring to the download of the full product install for
the Session State tests and not the Lucene tests?  I don't believe any of
the session state tests are being run as part of the build and it is the
download that is potentially taking a long time (which gets cleaned and
downloaded after every clean build)

The test themselves are marked as DistributedTest, however I don't think
our gradle test task can use these annotations to fire off specific gradle
tasks.  So it is currently lumped in with compiling geode-old-versions.

Just for reference, my build is taking 8 minutes but the downloading of the
product install is taking 30 seconds on my machine.  I don't think this
will solve the entire 10 minute build...






On Mon, Sep 11, 2017 at 11:23 AM Jacob Barrett  wrote:

> Agreed, integration tests should not be part of the build process. This is
> clearly an integration test.
>
> > On Sep 11, 2017, at 11:00 AM, Udo Kohlmeyer  wrote:
> >
> > Hi there,
> >
> > With a recent addition to the build scripts, to test lucene backwards
> compatibility, a step was added to download a previous version of GEODE.
> >
> > This is causing longer build times now, which is a real distraction. In
> cases where one would like to work on a branch, rebase that on develop and
> merge that, this step becomes a real time hog.
> >
> > I request that we remove this default behavior from a clean build until
> we have a better solution to this issue.
> >
> > I also believe that if anyone wants to add behavior like this into the
> default build, that it at least is discussed on the dev list before
> implementing this.
> >
> > --Udo
> >
>


Re: [DISCUSS] Clean build takes 10minutes to complete now

2017-09-15 Thread Jason Huynh
For the original issue, where the build pulls down the old versions during
compilation time, we have a pull request to use the maven repo:
https://github.com/apache/geode/pull/790

The rest of the tests added for the session state are marked as
@Category({DistributedTest.class, BackwardCompatibilityTest.class}), but
geode-old-versions happens to be required at compile time.

The old versions will now (after the pull request is checked in) pull down
the zip into the local repo once.  Unzipping will still be required but
that should be a lot shorter than downloading.

Whether we should move the old versions out of the compile task is a
different issue...



On Fri, Sep 15, 2017 at 8:55 AM Kirk Lund  wrote:

> The actual tests marked with UnitTest category are actually pretty good.
> They all run in just over one minute and almost all of them use Mockito to
> isolate one class. I remember seeing one newer Lucene UnitTest that touches
> File System which should be recategorized as IntegrationTest.
>
> If we could move the pulling down of previous versions of Geode out of the
> main build+unit-test target, that would help a lot.
>
> Even prior to the pulling down of previous versions for backwards compat
> testing, the main build (without unit-test) was too slow and I think it's
> because our project is a little too complex for what Gradle is designed to
> handle.
>
> Code generation and javadocs are two of the tasks in our main build
> (without unit-test) that contributes to it taking too long.
>
> Also, the way Gradle handles junit categories is designed and coded very
> inefficiently -- if we could change their junit runner to use
> FastClasspathScanner to find all tests containing the targeted junit
> category annotation then that would speed up all of our testing targets
> immensely. Any testing target that forks JVMs runs super slow due to the
> way they handle categories (this effects IntegrationTest, DistributedTest,
> FlakyTest).
>
> On Fri, Sep 15, 2017 at 8:44 AM, Alexander Murmann 
> wrote:
>
> > I fully agree with Udo here. The main build should be for Unit tests. Our
> > "Unit Tests" are already exercising much more of the system than they
> > should. Adding unit tests that not only too much or our current code but
> > also old code is moving us in the wrong direction. Let's keep the tests,
> > but please appropriately mark them as IntegrationTest.
> >
> > On Tue, Sep 12, 2017 at 9:30 AM, Udo Kohlmeyer 
> > wrote:
> >
> > > My apologies, I might gotten the commit reason incorrect. I just know
> > that
> > > downloading the older product version every time is becoming painful.
> > > Yes, sometimes it is faster than other times, but imo, this is not
> > > something that should be part of the main build path.
> > >
> > > Backwards compat or integration testing should not be running as part
> of
> > > the main build task.
> > >
> > > --Udo
> > >
> > > On Tue, Sep 12, 2017 at 9:05 AM, Nabarun Nag  wrote:
> > >
> > > > As we are working on fixing this issue, some extra parameters may
> help
> > > the
> > > > build to get bit quicker on your machine.
> > > >
> > > > using -xjavadoc -xdoc
> > > > Eg: ./gradlew clean build -Dskip.tests=true -xjavadoc -xdocs
> > > > BUILD SUCCESSFUL
> > > > Total time: 2 mins 2.729 secs
> > > >
> > > >
> > > > Also, I think as Jason mentioned that the slow down is due to full
> > > product
> > > > download for session state tests. LuceneSearchWithRollingUpgradeDUnit
> > > > tests
> > > > were added  in July. Please do correct me if I am wrong.
> > > >
> > > > Regards
> > > > Nabarun
> > > >
> > > >
> > > > On Tue, Sep 12, 2017 at 11:47 AM Alexander Murmann <
> > amurm...@pivotal.io>
> > > > wrote:
> > > >
> > > > > Could we make it so that these tests for now are only run as part
> of
> > > > > pre-checkin till we got this ironed out and then revisit this?
> > > > >
> > > > > On Tue, Sep 12, 2017 at 8:32 AM, Bruce Schuchardt <
> > > > bschucha...@pivotal.io>
> > > > > wrote:
> > > > >
> > > > > > The geode-old-versions module was originally created to pull in
> old
> > > > > > version jar files into your gradle cache.  This happened only
> once
> > > and
> > > > > you
> > > > > > were good to go.  I don't think that part should be backed out as
> > it
> > > > has
> > > > > > minimal impact and is not affecting build time.
> > > > > >
> > > > > > The recent changes for lucene testing seem to be pulling in full
> > > > > > installations of old versions and these are deleted as part of
> the
> > > > > "clean"
> > > > > > gradle task.  That's causing them to be downloaded again each
> time
> > > you
> > > > > do a
> > > > > > clean&build.  Dan put changes in place so that the files aren't
> > > > > downloaded
> > > > > > again if you build without cleaning but clearly more needs to be
> > done
> > > > in
> > > > > > this area.
> > > > > >
> > > > > >
> > > > > >
> > > > > > On 9/11/17 11:23 AM, Jacob Barrett wrote:
> > > > > >
> > > > > >> Agreed, integration tests shou

Re: [DISCUSS] Framework for concurrency tests

2017-09-15 Thread Jason Huynh
+1 to Dan's changes, but also +1 to Galen's suggestion.  JPF looks like it
might take a bit to run all the different states even for a small
interleaving of code (maybe we can tune/configure it though).  Or we can
mark these as a different category and not run them as a "UnitTest".

On Fri, Sep 15, 2017 at 2:22 PM Jacob Barrett  wrote:

> What? You don’t think Travis can run these fast?
>
> > On Sep 15, 2017, at 2:07 PM, Galen O'Sullivan 
> wrote:
> >
> > +1 This is great! I'll take a look at your PR when I get the time.
> >
> > We may want to think carefully about how often we run these tests,
> because
> > unlike regular unit tests, they will take forever to run.
> >
> > On Fri, Sep 15, 2017 at 1:42 PM, Michael William Dodge <
> mdo...@pivotal.io>
> > wrote:
> >
> >> +1 for unit tests for multithreaded code.
> >>
> >> High fives to Dan.
> >>
> >> Sarge
> >>
> >>> On 15 Sep, 2017, at 12:08, Dan Smith  wrote:
> >>>
> >>> Hi Geode devs,
> >>>
> >>> I've been messing around with an open source tool called Java
> >>> Pathfinder for writing tests of multithreaded code. Java Pathfinder is
> >>> a special JVM which among other things tries to execute your code
> >>> using all possible thread interleavings.
> >>>
> >>> I'd like to propose two things:
> >>>
> >>> 1) We introduce a framework for writing unit tests of code that is
> >>> supposed to be thread safe. This framework should let a developer
> >>> easily write a test with multiple things going on in parallel. The
> >>> framework can then take that code and try to run it with different
> >>> thread interleavings.
> >>>
> >>> Here's an example of what this could look like:
> >>>
> >>> @RunWith(ConcurrentTestRunner.class)
> >>> public class AtomicIntegerTest {
> >>>
> >>> @Test
> >>> public void parallelIncrementReturns2(ParallelExecutor executor)
> >>> throws ExecutionException, InterruptedException {
> >>>   AtomicInteger atomicInteger = new AtomicInteger();
> >>>   executor.inParallel(() -> atomicInteger.incrementAndGet());
> >>>   executor.inParallel(() -> atomicInteger.incrementAndGet());
> >>>   executor.execute();
> >>>   assertEquals(2, atomicInteger.get());
> >>> }
> >>>
> >>>
> >>> 2) We implement this framework initially using Java Pathfinder, but
> >>> allow for other methods of testing the code to be plugged in for
> >>> example just running the test in the loop. Java pathfinder is cool
> >>> because it can run the code with different interleavings but it does
> >>> have some serious limitations.
> >>>
> >>> I've put together some code for this proposal which is available in
> >>> this github PR:
> >>>
> >>> https://github.com/apache/geode/pull/787
> >>>
> >>> What do you think?
> >>>
> >>> -Dan
> >>
> >>
>


Re: [DISCUSS] Addition of isValid API to Index interface

2017-09-21 Thread Jason Huynh
into
> > > > > >> the region...It looks like the index update are happening after
> > the
> > > > > region
> > > > > >> change/update is saved. Moving the index update before that is
> not
> > > an
> > > > > easy
> > > > > >> task...
> > > > > >>
> > > > > >> For time, when there is any problem with index update, we can
> > > proceed
> > > > > with
> > > > > >> invalidating the indexes...But we really need to look at making
> > > region
> > > > > and
> > > > > >> index updates in a transactional way, silently invalidating
> > indexes
> > > > may
> > > > > >> not
> > > > > >> be acceptable...
> > > > > >>
> > > > > >> -Anil.
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> On Thu, Sep 14, 2017 at 1:12 PM, Dan Smith 
> > > wrote:
> > > > > >>
> > > > > >> > I'm still going to push that we stick with Naba's original
> > > proposal.
> > > > > >> >
> > > > > >> > The current behavior is clearly broken. If one index update
> > fails,
> > > > an
> > > > > >> > exception gets thrown to the user (nice!) but it leaves the
> put
> > > in a
> > > > > >> > partially completed state - some other indexes may not have
> been
> > > > > >> updated,
> > > > > >> > WAN/AEQs may not have been notified, etc.
> > > > > >> >
> > > > > >> > We should never leave the system in this corrupted state. It
> > would
> > > > be
> > > > > >> nice
> > > > > >> > to be able to cleanly rollback the put, but we don't have that
> > > > > >> capability
> > > > > >> > especially considering that cache writers have already been
> > > invoked.
> > > > > So
> > > > > >> the
> > > > > >> > next best thing is to invalidate the index that failed to
> > update.
> > > > > >> >
> > > > > >> > Logging an error an allowing the put to succeed does match
> what
> > we
> > > > do
> > > > > >> with
> > > > > >> > CacheListeners. Exceptions from CacheListeners do not fail the
> > > put.
> > > > > >> >
> > > > > >> > -Dan
> > > > > >> >
> > > > > >> > On Mon, Sep 11, 2017 at 3:29 PM, Jason Huynh <
> jhu...@pivotal.io
> > >
> > > > > wrote:
> > > > > >> >
> > > > > >> > > Anil, we actually do have a case where the index is out of
> > sync
> > > > with
> > > > > >> the
> > > > > >> > > region currently.  It's just not likely to happen but if
> there
> > > is
> > > > an
> > > > > >> > > exception from an index, the end result is that certain
> > indexes
> > > > get
> > > > > >> > updated
> > > > > >> > > and the region has already been updated.
> > > > > >> > > However the exception is thrown back to the putter, so it
> > > becomes
> > > > > very
> > > > > >> > > obvious something is wrong but I believe Naba has updated
> the
> > > > ticket
> > > > > >> to
> > > > > >> > > show a test that reproduces the problem...
> > > > > >> > >
> > > > > >> > >
> > > > > >> > > On Mon, Sep 11, 2017 at 2:50 PM Anilkumar Gingade <
> > > > > >> aging...@pivotal.io>
> > > > > >> > > wrote:
> > > > > >> > >
> > > > > >> > > > The other way to look at it is; what happens to a cache
> op;
> > > when
> > > > > >> there
> > > > > >> > is
> > > > > >> > > > an exception after Region.Entry is created? can it happen?
> > In
> > > > that

[DISCUSS] Removal of "Submit an Issue" from Geode webpage

2017-09-29 Thread Jason Huynh
I'd like to remove the "Submit an Issue" button/script attached to the site.

We occasionally get JIRA tickets that come in through the "Submit an Issue"
button on the Apache Geode website.  However, these tickets are being
created through a script, and this sets Gregory Chase as the reporter and
also marks the ticket as an Improvement.  We may also end up
overlooking/not seeing these issues (as GEODE-3280 had been open and no one
really noticed...)

For example:
https://issues.apache.org/jira/browse/GEODE-3280
https://issues.apache.org/jira/browse/GEODE-3709
https://issues.apache.org/jira/browse/GEODE-2181


Attached is a screen shot showing the "Submit an Issue" button.
[image: Screen Shot 2017-09-29 at 10.43.44 AM.png]


Re: Rebase and squash before merging PRs

2017-10-05 Thread Jason Huynh
I think we can also use "squash and merge" if you want to squash commits
before merging.  This would avoid having to force push every time.

On Thu, Oct 5, 2017 at 3:15 PM Jinmei Liao  wrote:

> On the PR UI page, you can do that by pull down the the menu when you are
> ready to merge. Remember to use "Rebase and merge".
>
>
>
> Not sure if this is useful to everyone, but when I push a subsequent commit 
> to my feature branch, I always use "force push", so that it's only one commit 
> I need to rebase to develop.
>
>
> On Thu, Oct 5, 2017 at 3:00 PM, Jared Stewart  wrote:
>
>> I’ve been seeing a lot more merge commits on develop since we moved to
>> Gitbox.  Just wanted to give everyone a friendly reminder to please rebase
>> before merging to keep our git history tidy and readable.
>>
>> Thanks,
>> Jared
>
>
>
>
> --
> Cheers
>
> Jinmei
>


Permissions to edit the wiki

2017-10-27 Thread Jason Huynh
Hi,

I would like to be able to edit the wiki for Geode and I don't think I have
the correct permissions at this time.  Would someone be able to give me the
permissions to do so?  My username in confluence is huynhja.

Thanks,
-Jason


Re: Permissions to edit the wiki

2017-10-27 Thread Jason Huynh
Thanks!

On Fri, Oct 27, 2017 at 3:08 PM Dan Smith  wrote:

> You should have permissions now.
>
> -Dan
>
> On Fri, Oct 27, 2017 at 1:26 PM, Jason Huynh 
> wrote:
>
> > Hi,
> >
> > I would like to be able to edit the wiki for Geode and I don't think I
> have
> > the correct permissions at this time.  Would someone be able to give me
> the
> > permissions to do so?  My username in confluence is huynhja.
> >
> > Thanks,
> > -Jason
> >
>


Re: [Discussion] SQL+Streaming/JDBC as one of the unified interfaces

2017-11-13 Thread Jason Huynh
Hi Christian,

I don't know much about Calcite and haven't had a chance to try out your
adapter yet but it sounds like a neat idea.  Will your talk be recorded and
available after the Summit?

Also for question 1. Would you be interested to have the adapter as part of
Geode's code ecosystem?
Do you mean to create a module in Geode for this adapter?  Would it make
sense to add a Geode module to Calcite?  Were you wanting a tighter
integration (beyond an adapter) with Calcite within Geode?

-Jason

On Fri, Nov 10, 2017 at 3:49 AM Christian Tzolov  wrote:

> Hi,
>
> I've been working lately on Apache Calcite SQL/JDBC adapter for Apache
> Geode [1].
>
> Adapter's current implementation act as a plain Geode client (using the
> public API/OQL interfaces) trying to push down to Geode OQL as many
> relational expressions as it can. Relational expressions not supported by
> OQL are executed by the adapter itself.
>
> While this approach has its advantages and disadvantages, which I will try
> to address at my Geode Summit talk [2] I would like to ask two question:
>
> 1. Would you be interested to have the adapter as part of Geode's code
> ecosystem?
>
> 2. I am aware (an experienced it myself) the SQLFire story. But given that
> OQL features are expanding (aggregations are are already supported) and
> that tools like Calcite offer proper logical/physical (cost based) planer
> and SQL extensions such as SQL streaming, would it be useful to discuss
> what novel approaches for using SQL/JDBC with Geode are possible?
>
> (Julian Hyde - founder of Calcite - is in cc)
>
> Cheers,
> Christian
>
> [1] https://github.com/tzolov/calcite/tree/geode-1.3
> [2]
>
> https://springoneplatform.io/sessions/enable-sql-jdbc-access-to-apache-geode-gemfire-using-apache-calcite
>
>
> --
> Christian Tzolov  | Principle Software
> Engineer | Pivotal  | ctzo...@pivotal.io |+31610285517
> <+31%206%2010285517>
>


[DISCUSS] changes to registerInterest API

2017-11-16 Thread Jason Huynh
For GEODE-3813: Region registerInterest API usage of type parameters is broken



The current API to registerInterest allows a special string token
“ALL_KEYS” to be passed in as the parameter to registerInterest(T key).
This special token causes registerInterest to behave similarly to
registerInterestRegex(“.*”).  As the ticket states, if the region has been
typed to anything other than Object or String, the usage of “ALL_KEYS” as a
parameter results in a compilation error.
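
To make the problem concrete, here is a rough sketch (the region name and
key/value types are made up for illustration, and a ClientCache reference
named clientCache is assumed, along with the usual
org.apache.geode.cache.client imports):

  Region<Integer, String> orders = clientCache
      .<Integer, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
      .create("orders");

  orders.registerInterest("ALL_KEYS");  // does not compile: the key type is Integer
  orders.registerInterestRegex(".*");   // the current workaround for "all keys"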


Proposals:

I would like to deprecate the special string “ALL_KEYS” and document a
workaround of using registerInterestRegex(“.*”), or we can add a new API
called registerInterestAllKeys().


I think we should also deprecate passing a List Object of keys into
registerInterest.  It has the same compilation restrictions as “ALL_KEYS”
when the region is key constrained/typed.  The reason why List would be
used is to allow registering multiple keys at once.  Instead, we can add a
new var arg API like registerInterest(T… keys).  This problem and solution
was also documented in the ticket by the ticket creator (Kirk Lund)



Thanks,

-Jason


Re: [DISCUSS] changes to registerInterest API

2017-11-17 Thread Jason Huynh
Hi Mike,

The current support for List leads to compilation issues if the region is
type constrained.  However, I think you are suggesting that, instead of a var
args method, we provide a registerInterest(List keys) method?

So far what I am hearing requested is:
deprecate current "ALL_KEYS" and List passing behavior
registerInterestAllKeys();
registerInterest(List keys) instead of a registerInterest(T... keys)

Will anyone ever actually have a List as the key itself?  The current and
suggested changes would not allow registering interest in a specific List
object.



On Thu, Nov 16, 2017 at 6:50 PM Jacob Barrett  wrote:

> Geode Native C++ and .NET have:
>
>   virtual void registerKeys(const
> std::vector> & keys,
> bool isDurable = false,
> bool getInitialValues = false,
> bool receiveValues = true) = 0;
>
>   virtual void unregisterKeys(const
> std::vector> & keys) = 0;
>
>   virtual void *registerAllKeys*(bool isDurable = false,
>bool getInitialValues = false,
>bool receiveValues = true) = 0;
>
>   virtual void unregisterAllKeys() = 0;
>
>   virtual void registerRegex(const std::string& regex,
>  bool isDurable = false,
>  bool getInitialValues = false,
>  bool receiveValues = true) = 0;
>
>   virtual void unregisterRegex(const char* regex) = 0;
>
> I dislike special values like this so yes please make it go away!
>
> -Jake
>
>
> On Thu, Nov 16, 2017 at 5:20 PM Dan Smith  wrote:
>
> > I don't really like the regex option - it implies that your keys are all
> > strings. Will any other regular expressions work on non string objects?
> > registerInterestAllKeys() seems like a better option.
> >
> > -Dan
> >
> > On Thu, Nov 16, 2017 at 4:34 PM, Michael Stolz 
> wrote:
> >
> > > I don't like the vararg option.
> > > If i'm maintaining a list of keys i'm interested in, I want to be able
> to
> > > pass that List in.
> > > Varargs is a poor substitute. It might even cause problems of pushing
> in
> > > multiple different types. Keys must all be of one type for a given
> > Region.
> > >
> > >
> > > I'm very much in favor of deprecating the ALL_KEYS string in favor of
> > > something that is typed specially if you refer to ALL_KEYS.
> > >
> > >
> > > If that works, then we don't necessarily need the additional API
> > > registerInterestAllKeys(). But if ALL_KEYS can't be a special type to
> get
> > > over the compilation issues then we should go with the new API.
> > >
> > >
> > >
> > > --
> > > Mike Stolz
> > > Principal Engineer, GemFire Product Lead
> > > Mobile: +1-631-835-4771 <(631)%20835-4771> <(631)%20835-4771>
> > >
> > > On Thu, Nov 16, 2017 at 7:02 PM, Anilkumar Gingade <
> aging...@pivotal.io>
> > > wrote:
> > >
> > > > +1 Deprecating ALL_KEYS option; I believe this is added before we
> > > supported
> > > > regex support.
> > > >
> > > >  Doesn't seems like a new API is needed. The regex java doc clearly
> > > > specifies the effect of ".*".
> > > >
> > > > +1 for deprecating list argument; and replacing with new API.
> > > >
> > > > -Anil.
> > > >
> > > >
> > > >
> > > > On Thu, Nov 16, 2017 at 3:36 PM, Jason Huynh 
> > wrote:
> > > >
> > > > > For GEODE-3813 <https://issues.apache.org/jira/browse/GEODE-3813>:
> > > > Region
> > > > > registerInterest API usage of type parameters is broken
> > > > > <https://issues.apache.org/jira/browse/GEODE-3813>
> > > > >
> > > > >
> > > > > The current API to registerInterest allows a special string token
> > > > > “ALL_KEYS” to be passed in as the parameter to registerInterest(T
> > key).
> > > > > This special token causes the registerInterest to behave similar to
> > > > > registerInterestRegex(“.*”).  As the ticket states, if the region
> has
> > > > been
> > > > > typed to anything other than Object or String, the usage of
> > “ALL_KEYS”
> > > > as a
> > > > > parameter results in a compilation error.
> > > > >
> > > > >
> > > > > Proposals:
> > > > >
> > > > > I would like to deprecate the special string “ALL_KEYS” and
> document
> > a
> > > > > workaround of using registerInterestRegex(“.*”) or we can add a new
> > API
> > > > > called registerInterestAllKeys()
> > > > >
> > > > >
> > > > > I think we should also deprecate passing a List Object of keys into
> > > > > registerInterest.  It has the same compilation restrictions as
> > > “ALL_KEYS”
> > > > > when the region is key constrained/typed.  The reason why List
> would
> > be
> > > > > used is to allow registering multiple keys at once.  Instead, we
> can
> > > add
> > > > a
> > > > > new var arg API like registerInterest(T… keys).  This problem and
> > > > solution
> > > > > was also documented in the ticket by the ticket creator (Kirk Lund)
> > > > >
> > > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > -Jason
> > > > >
> > > >
> > >
> >
>


Re: [DISCUSS] changes to registerInterest API

2017-11-17 Thread Jason Huynh
Current idea is to:
- deprecate current "ALL_KEYS" and List passing behavior in
registerInterest()
- add registerInterestAllKeys();
- add registerInterest(T... keys) and registerInterest(Iterable keys) and
not have one specifically for List or specific collections.

The Iterable version would handle any collection type by having the user
pass in the iterator for the collection.
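
Roughly, the additions to Region would look something like this (signatures
are only a sketch of the proposal, not final):

  void registerInterestAllKeys();
  void registerInterest(K... keys);         // var args, for a handful of keys
  void registerInterest(Iterable<K> keys);  // any collection of keys

  // e.g. (keys made up):
  region.registerInterest(1, 2, 3);
  region.registerInterest(Arrays.asList(4, 5, 6));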

On Fri, Nov 17, 2017 at 11:32 AM Jacob Barrett  wrote:

> I am failing to see where registerInterest(List keys) is an issue for
> the key type in the region. If our region is Region then I would
> expect registerInterest(List). If the keys are unknown or a mix
> then you should have Region and thus registerInterest(List
> I echo John's statements on VarArgs and type erasure as well as his
> argument for Iterable.
>
> Also, List does not restrict you from List indexes. The region would be
> Region> with registerInterest>().
>
> -Jake
>
>
> On Fri, Nov 17, 2017 at 10:04 AM John Blum  wrote:
>
> > Personally, I prefer the var args method (registerInterest(T... keys))
> > myself.  It is way more convenient if I only have a few keys when calling
> > this method then to have to add the keys to a List, especially for
> testing
> > purposes.
> >
> > But, I typically like to pair that with a registerInterest(Iterable
> > keys) method
> > as well.  By having a overloaded Iterable variant, then I can pass in any
> > Collection type I want (which shouldn't be restricted to just List).  It
> > also is a simple matter to convert any *Collection* (i.e. *List*, *Set*,
> > etc) to an array, which can be passed to the var args method.  By using
> > List,
> > you are implying that "order matters" since a List is a order collection
> of
> > elements.
> >
> > This ("*It might even cause problems of pushing in **multiple different
> > types.*"), regarding var args, does not even make sense. Technically,
> > List is no different.  Java's type erasure essentially equates var
> args
> > too "Object..." (or Object[]) and the List to List (or a List of
> > Objects,
> > essentially like if you just did this... List) So, while the
> > compiler ensures compile-time type-safety of generics, there is no
> generics
> > type-safety guarantees at runtime.
> >
> >
> >
> > On Fri, Nov 17, 2017 at 9:22 AM, Jason Huynh  wrote:
> >
> > > Hi Mike,
> > >
> > > The current support for List leads to compilation issues if the region
> is
> > > type constrained.  However I think you are suggesting instead of a var
> > args
> > > method, instead provide a registerInterest(List keys) method?
> > >
> > > So far what I am hearing requested is:
> > > deprecate current "ALL_KEYS" and List passing behavior
> > > registerInterestAllKeys();
> > > registerInterest(List keys) instead of a registerInterest(T... keys)
> > >
> > > Will anyone ever actually have a List as the key itself? The current
> and
> > > suggested changes would not allow it registering for a specific List
> > > object.
> > >
> > >
> > >
> > > On Thu, Nov 16, 2017 at 6:50 PM Jacob Barrett 
> > wrote:
> > >
> > > > Geode Native C++ and .NET have:
> > > >
> > > >   virtual void registerKeys(const
> > > > std::vector> & keys,
> > > > bool isDurable = false,
> > > > bool getInitialValues = false,
> > > > bool receiveValues = true) = 0;
> > > >
> > > >   virtual void unregisterKeys(const
> > > > std::vector> & keys) = 0;
> > > >
> > > >   virtual void *registerAllKeys*(bool isDurable = false,
> > > >bool getInitialValues = false,
> > > >bool receiveValues = true) = 0;
> > > >
> > > >   virtual void unregisterAllKeys() = 0;
> > > >
> > > >   virtual void registerRegex(const std::string& regex,
> > > >  bool isDurable = false,
> > > >  bool getInitialValues = false,
> > > >  bool receiveValues = true) = 0;
> > > >
> > > >   virtual void unregisterRegex(const char* regex) = 0;
> > > >
> > > > I dislike special values like this so yes please make it go away!
> > > >
> >

Re: [DISCUSS] changes to registerInterest API

2017-11-17 Thread Jason Huynh
Thanks John for the clarification!

On Fri, Nov 17, 2017 at 1:12 PM John Blum  wrote:

> This...
>
> > The Iterable version would handle any collection type by having the user
> pass
> in the iterator for the collection.
>
> Is not correct.
>
> The Collection interface itself "extends" the java.lang.Iterable
> interface (see here...
> https://docs.oracle.com/javase/8/docs/api/java/util/Collection.html under
> "*All
> Superinterfaces*").
>
> Therefore a user can simply to this...
>
> *List* keys = ...
>
> region.registerInterest(keys); *// calls the
> Region.registerInterest(:Iterable) method.*
>
> Alternatively, this would also be allowed...
>
> *Set* keys = ...
>
> region.registerInterest(keys);
>
>
> On Fri, Nov 17, 2017 at 11:44 AM, Jason Huynh  wrote:
>
> > Current idea is to:
> > - deprecate current "ALL_KEYS" and List passing behavior in
> > registerInterest()
> > - add registerInterestAllKeys();
> > - add registerInterest(T... keys) and registerInterest(Iterablekeys)
> > and
> > not have one specifically for List or specific collections.
> >
> > The Iterable version would handle any collection type by having the user
> > pass in the iterator for the collection.
> >
> > On Fri, Nov 17, 2017 at 11:32 AM Jacob Barrett 
> > wrote:
> >
> > > I am failing to see where registerInterest(List keys) is an issue
> for
> > > the key type in the region. If our region is Region then I
> would
> > > expect registerInterest(List). If the keys are unknown or a mix
> > > then you should have Region and thus
> > registerInterest(List > >
> > > I echo John's statements on VarArgs and type erasure as well as his
> > > argument for Iterable.
> > >
> > > Also, List does not restrict you from List indexes. The region would
> > be
> > > Region> with registerInterest>().
> > >
> > > -Jake
> > >
> > >
> > > On Fri, Nov 17, 2017 at 10:04 AM John Blum  wrote:
> > >
> > > > Personally, I prefer the var args method (registerInterest(T...
> keys))
> > > > myself.  It is way more convenient if I only have a few keys when
> > calling
> > > > this method then to have to add the keys to a List, especially for
> > > testing
> > > > purposes.
> > > >
> > > > But, I typically like to pair that with a
> registerInterest(Iterable
> > > > keys) method
> > > > as well.  By having a overloaded Iterable variant, then I can pass in
> > any
> > > > Collection type I want (which shouldn't be restricted to just List).
> > It
> > > > also is a simple matter to convert any *Collection* (i.e. *List*,
> > *Set*,
> > > > etc) to an array, which can be passed to the var args method.  By
> using
> > > > List,
> > > > you are implying that "order matters" since a List is a order
> > collection
> > > of
> > > > elements.
> > > >
> > > > This ("*It might even cause problems of pushing in **multiple
> different
> > > > types.*"), regarding var args, does not even make sense. Technically,
> > > > List is no different.  Java's type erasure essentially equates var
> > > args
> > > > too "Object..." (or Object[]) and the List to List (or a List of
> > > > Objects,
> > > > essentially like if you just did this... List) So, while the
> > > > compiler ensures compile-time type-safety of generics, there is no
> > > generics
> > > > type-safety guarantees at runtime.
> > > >
> > > >
> > > >
> > > > On Fri, Nov 17, 2017 at 9:22 AM, Jason Huynh 
> > wrote:
> > > >
> > > > > Hi Mike,
> > > > >
> > > > > The current support for List leads to compilation issues if the
> > region
> > > is
> > > > > type constrained.  However I think you are suggesting instead of a
> > var
> > > > args
> > > > > method, instead provide a registerInterest(List keys) method?
> > > > >
> > > > > So far what I am hearing requested is:
> > > > > deprecate current "ALL_KEYS" and List passing behavior
> > > > > registerInterestAllKeys();
> > > > > registerInterest(List keys) instead of a registerInterest(T...
> > keys)
> > > > >
> > > > > 

[DISCUSS] FunctionAdapter incompatible serialVersionUID

2017-11-27 Thread Jason Huynh
This is a discussion for the fix to GEODE-4008:
InvalidClassException when deserializing FunctionAdapter from pre-Geode
clients

There was a change to deprecate FunctionAdapter in Geode (before 1.0), and
this also removed the method signatures in the class. This caused Java to
generate a new serialVersionUID for the class because one was not assigned
previously. However, for pre-Geode clients that attempt to execute a
function by serializing the function across (not using a function id), the
FunctionAdapter class is unable to deserialize properly.

The proposed fix is to assign a serialVersionUID to the class that matches
that of the pre-Geode FunctionAdapter. This will cause any Geode 1.0-1.3
clients to now run into the error, but the older clients would work fine.
Because FunctionAdapter has been deprecated, it should be easy enough for
Geode 1.0-1.3 users to change their custom classes to implement Function
directly and not use the deprecated FunctionAdapter class.
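
For reference, the change amounts to something like the following sketch (the
numeric value below is only a placeholder; the real fix would pin whatever
serialVersionUID the pre-Geode FunctionAdapter resolved to):

  public abstract class FunctionAdapter implements Function {
    // pin the UID so pre-Geode clients can still deserialize this class;
    // -123L is NOT the real value, just an illustration
    private static final long serialVersionUID = -123L;
    // existing deprecated methods left unchanged
  }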

Please let me know if there is a better solution or if there are problems
with the proposed fix.


Thanks,

-Jason


Re: [DISCUSS] FunctionAdapter incompatible serialVersionUID

2017-11-28 Thread Jason Huynh
Dan, yeah, the suggested change in the stack overflow answer does work and
I was able to put an if with the exact serialVersionUid before posting the
proposal, but it is pretty hacky and may affect another class that somehow
generated the same uid.  I can make that change too but I'd prefer not to
have to maintain it moving forward...



On Tue, Nov 28, 2017 at 5:09 PM Dan Smith  wrote:

> I agree I don't think we can get rid of FunctionAdapter until the next
> major version.
>
> I was thinking FunctionAdapter is rather widely used, but then I'm
> surprised no one has hit this yet.
>
> All of the options kinda suck here - either pre 1.0 users have a
> compatibility issue or 1.0-1.3 users do. With your proposoal 1.0 - 1.3
> users would have modify their source code on the client and the server for
> the function, correct?
>
> If we got really fancy we could actually ignore the serialVersionUUID for
> this class like this - https://stackoverflow.com/a/1816711/2813144. But
> it's pretty messy.
>
> -Dan
>
> On Tue, Nov 28, 2017 at 1:59 PM, Alexander Murmann 
> wrote:
>
> > Anil, I am not sure following. I think FunctionAdapter already is
> > deprecated. Isn't it? Anthony is right though that we shouldn't remove
> > anything customer facing unless we are doing a major release. Otherwise
> we
> > are violating the contract provided by semantic versioning.
> >
> > On Tue, Nov 28, 2017 at 1:52 PM, Anilkumar Gingade 
> > wrote:
> >
> > > I haven't seen many uses of FunctionAdapter; if its not used much, I
> > think
> > > we should deprecate this...
> > >
> > > It only provided default implementation for few of the methods; this
> > could
> > > be added in the docs/release notes to help application to move to
> > function
> > > implementation.
> > >
> > > -Anil.
> > >
> > >
> > > On Tue, Nov 28, 2017 at 12:40 PM, Anthony Baker 
> > wrote:
> > >
> > > > I think we should wait for a major release to remove API’s.  If we
> > broke
> > > a
> > > > public API, we should fix that IMO.
> > > >
> > > > Anthony
> > > >
> > > >
> > > > > On Nov 28, 2017, at 11:40 AM, Patrick Rhomberg <
> prhomb...@pivotal.io
> > >
> > > > wrote:
> > > > >
> > > > > +1 to removing a long-deprecated class from the Geode side.
> > > > >
> > > > > On Tue, Nov 28, 2017 at 8:04 AM, Bruce Schuchardt <
> > > > bschucha...@pivotal.io>
> > > > > wrote:
> > > > >
> > > > >> How about just getting rid of this class?  After all it was marked
> > as
> > > > >> being deprecated in 1.0.  Pivotal could add a compatible
> > > FunctionAdapter
> > > > >> class in their GemFire builds to support these old clients.
> > > > >>
> > > > >>
> > > > >>
> > > > >> On 11/27/17 10:18 AM, Jason Huynh wrote:
> > > > >>
> > > > >>> This is a discussion for the fix to GEODE-4008:
> > > > >>> InvalidClassException when deserializing FunctionAdapter from pre
> > > Geode
> > > > >>> clients
> > > > >>>
> > > > >>> There was a change to deprecate FunctionAdapter in Geode (before
> > > 1.0),
> > > > and
> > > > >>> this also removed the method signatures in the class. This caused
> > > Java
> > > > to
> > > > >>> generate a new serialVersionUID to the class because one was not
> > > > assigned
> > > > >>> previously. However we have clients pre Geode that when they
> > attempt
> > > to
> > > > >>> execute a function by serializing the function across (not using
> a
> > > > >>> function
> > > > >>> id), the FunctionAdapter class is unable to deserialize properly.
> > > > >>>
> > > > >>> The proposed fix is to assign a serialVersionUID to the class
> that
> > > > matches
> > > > >>> that of the pre Geode FunctionAdapter. This will cause any Geode
> > > > 1.0-1.3
> > > > >>> clients to now run into the error but the older clients would
> work
> > > > fine.
> > > > >>> Because FunctionAdapter has been deprecated it should be easy
> > enough
> > > > for
> > > > >>> Geode 1.0-1.3 users to change their custom classes to implement
> > > > Function
> > > > >>> directly and not use the deprecated FunctionAdapter class.
> > > > >>>
> > > > >>> Please let me know if there is a better solution or if there are
> > > > problems
> > > > >>> with the proposed fix.
> > > > >>>
> > > > >>>
> > > > >>> Thanks,
> > > > >>>
> > > > >>> -Jason
> > > > >>>
> > > > >>>
> > > > >>
> > > >
> > > >
> > >
> >
>


Re: [DISCUSS] FunctionAdapter incompatible serialVersionUID

2017-11-28 Thread Jason Huynh
*With your proposoal 1.0 - 1.3 users would have modify their source code on
the client and the server for the function, correct?*

If they start a new Geode server 1.4+ and happen to extend
FunctionAdapter (it was deprecated in 1.0), then they would have to
recompile their client to not use FunctionAdapter.

This should only affect users that extend FunctionAdapter and execute
functions by serializing them to the server from the client.  If they
execute by id, they should not run into this problem...

On Tue, Nov 28, 2017 at 5:20 PM Jason Huynh  wrote:

> Dan, yeah, the suggested change in the stack overflow answer does work and
> I was able to put an if with the exact serialVersionUid before posting the
> proposal, but it is pretty hacky and may affect another class that somehow
> generated the same uid.  I can make that change too but I'd prefer not to
> have to maintain it moving forward...
>
>
>
> On Tue, Nov 28, 2017 at 5:09 PM Dan Smith  wrote:
>
>> I agree I don't think we can get rid of FunctionAdapter until the next
>> major version.
>>
>> I was thinking FunctionAdapter is rather widely used, but then I'm
>> surprised no one has hit this yet.
>>
>> All of the options kinda suck here - either pre 1.0 users have a
>> compatibility issue or 1.0-1.3 users do. With your proposoal 1.0 - 1.3
>> users would have modify their source code on the client and the server for
>> the function, correct?
>>
>> If we got really fancy we could actually ignore the serialVersionUUID for
>> this class like this - https://stackoverflow.com/a/1816711/2813144. But
>> it's pretty messy.
>>
>> -Dan
>>
>> On Tue, Nov 28, 2017 at 1:59 PM, Alexander Murmann 
>> wrote:
>>
>> > Anil, I am not sure following. I think FunctionAdapter already is
>> > deprecated. Isn't it? Anthony is right though that we shouldn't remove
>> > anything customer facing unless we are doing a major release. Otherwise
>> we
>> > are violating the contract provided by semantic versioning.
>> >
>> > On Tue, Nov 28, 2017 at 1:52 PM, Anilkumar Gingade > >
>> > wrote:
>> >
>> > > I haven't seen many uses of FunctionAdapter; if its not used much, I
>> > think
>> > > we should deprecate this...
>> > >
>> > > It only provided default implementation for few of the methods; this
>> > could
>> > > be added in the docs/release notes to help application to move to
>> > function
>> > > implementation.
>> > >
>> > > -Anil.
>> > >
>> > >
>> > > On Tue, Nov 28, 2017 at 12:40 PM, Anthony Baker 
>> > wrote:
>> > >
>> > > > I think we should wait for a major release to remove API’s.  If we
>> > broke
>> > > a
>> > > > public API, we should fix that IMO.
>> > > >
>> > > > Anthony
>> > > >
>> > > >
>> > > > > On Nov 28, 2017, at 11:40 AM, Patrick Rhomberg <
>> prhomb...@pivotal.io
>> > >
>> > > > wrote:
>> > > > >
>> > > > > +1 to removing a long-deprecated class from the Geode side.
>> > > > >
>> > > > > On Tue, Nov 28, 2017 at 8:04 AM, Bruce Schuchardt <
>> > > > bschucha...@pivotal.io>
>> > > > > wrote:
>> > > > >
>> > > > >> How about just getting rid of this class?  After all it was
>> marked
>> > as
>> > > > >> being deprecated in 1.0.  Pivotal could add a compatible
>> > > FunctionAdapter
>> > > > >> class in their GemFire builds to support these old clients.
>> > > > >>
>> > > > >>
>> > > > >>
>> > > > >> On 11/27/17 10:18 AM, Jason Huynh wrote:
>> > > > >>
>> > > > >>> This is a discussion for the fix to GEODE-4008:
>> > > > >>> InvalidClassException when deserializing FunctionAdapter from
>> pre
>> > > Geode
>> > > > >>> clients
>> > > > >>>
>> > > > >>> There was a change to deprecate FunctionAdapter in Geode (before
>> > > 1.0),
>> > > > and
>> > > > >>> this also removed the method signatures in the class. This
>> caused
>> > > Java
>> > > > to
>> > > > >>> generate a new serialVersionUID to the class because one was not
>> > > > assigned
>> > > > >>> previously. However we have clients pre Geode that when they
>> > attempt
>> > > to
>> > > > >>> execute a function by serializing the function across (not
>> using a
>> > > > >>> function
>> > > > >>> id), the FunctionAdapter class is unable to deserialize
>> properly.
>> > > > >>>
>> > > > >>> The proposed fix is to assign a serialVersionUID to the class
>> that
>> > > > matches
>> > > > >>> that of the pre Geode FunctionAdapter. This will cause any Geode
>> > > > 1.0-1.3
>> > > > >>> clients to now run into the error but the older clients would
>> work
>> > > > fine.
>> > > > >>> Because FunctionAdapter has been deprecated it should be easy
>> > enough
>> > > > for
>> > > > >>> Geode 1.0-1.3 users to change their custom classes to implement
>> > > > Function
>> > > > >>> directly and not use the deprecated FunctionAdapter class.
>> > > > >>>
>> > > > >>> Please let me know if there is a better solution or if there are
>> > > > problems
>> > > > >>> with the proposed fix.
>> > > > >>>
>> > > > >>>
>> > > > >>> Thanks,
>> > > > >>>
>> > > > >>> -Jason
>> > > > >>>
>> > > > >>>
>> > > > >>
>> > > >
>> > > >
>> > >
>> >
>>
>


Re: [DISCUSS] changes to registerInterest API

2017-11-30 Thread Jason Huynh
I started work on the following plan:
- deprecate current "ALL_KEYS" and List passing behavior in registerInterest()
- add registerInterestForAllKeys();
- add registerInterest(T... keys)
- add registerInterest(Iterable keys)

I might be missing something here but:
With the addition of registerInterest(Iterable keys), I think we would
not be able to register interest in a List as the key itself.  A list would
be iterated over due to the addition of registerInterest(Iterable keys).  A
list in a list would be passed into registerInterest and again be iterated
over.  I could change the newly created registerInterest call and explicitly
name it something else, or are we ok with Iterables not being able to be
registered as individual keys?
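
To illustrate the concern with a rough sketch (region name, key types, and the
clientCache reference are all made up for illustration):

  Region<List<String>, String> region = clientCache
      .<List<String>, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
      .create("composite");
  List<String> compositeKey = Arrays.asList("a", "b");

  // ambiguous intent: is this one composite key ["a", "b"], or should it be
  // iterated so that "a" and "b" are each registered individually?
  region.registerInterest(compositeKey);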





On Mon, Nov 20, 2017 at 9:05 AM Kirk Lund  wrote:

> John's approach looks best for when you need to specify keys.
>
> For ALL_KEYS, what about an API that doesn't require a token or all keys:
>
> public void registerInterestForAllKeys();
>
> On Fri, Nov 17, 2017 at 1:24 PM, Jason Huynh  wrote:
>
> > Thanks John for the clarification!
> >
> > On Fri, Nov 17, 2017 at 1:12 PM John Blum  wrote:
> >
> > > This...
> > >
> > > > The Iterable version would handle any collection type by having the
> > user
> > > pass
> > > in the iterator for the collection.
> > >
> > > Is not correct.
> > >
> > > The Collection interface itself "extends" the java.lang.Iterable
> > > interface (see here...
> > > https://docs.oracle.com/javase/8/docs/api/java/util/Collection.html
> > under
> > > "*All
> > > Superinterfaces*").
> > >
> > > Therefore a user can simply to this...
> > >
> > > *List* keys = ...
> > >
> > > region.registerInterest(keys); *// calls the
> > > Region.registerInterest(:Iterable) method.*
> > >
> > > Alternatively, this would also be allowed...
> > >
> > > *Set* keys = ...
> > >
> > > region.registerInterest(keys);
> > >
> > >
> > > On Fri, Nov 17, 2017 at 11:44 AM, Jason Huynh 
> wrote:
> > >
> > > > Current idea is to:
> > > > - deprecate current "ALL_KEYS" and List passing behavior in
> > > > registerInterest()
> > > > - add registerInterestAllKeys();
> > > > - add registerInterest(T... keys) and registerInterest(Iterable
> > keys)
> > > > and
> > > > not have one specifically for List or specific collections.
> > > >
> > > > The Iterable version would handle any collection type by having the
> > user
> > > > pass in the iterator for the collection.
> > > >
> > > > On Fri, Nov 17, 2017 at 11:32 AM Jacob Barrett 
> > > > wrote:
> > > >
> > > > > I am failing to see where registerInterest(List keys) is an
> issue
> > > for
> > > > > the key type in the region. If our region is Region then I
> > > would
> > > > > expect registerInterest(List). If the keys are unknown or a
> > mix
> > > > > then you should have Region and thus
> > > > registerInterest(List > > > >
> > > > > I echo John's statements on VarArgs and type erasure as well as his
> > > > > argument for Iterable.
> > > > >
> > > > > Also, List does not restrict you from List indexes. The region
> > would
> > > > be
> > > > > Region> with registerInterest>().
> > > > >
> > > > > -Jake
> > > > >
> > > > >
> > > > > On Fri, Nov 17, 2017 at 10:04 AM John Blum 
> wrote:
> > > > >
> > > > > > Personally, I prefer the var args method (registerInterest(T...
> > > keys))
> > > > > > myself.  It is way more convenient if I only have a few keys when
> > > > calling
> > > > > > this method then to have to add the keys to a List, especially
> for
> > > > > testing
> > > > > > purposes.
> > > > > >
> > > > > > But, I typically like to pair that with a
> > > registerInterest(Iterable
> > > > > > keys) method
> > > > > > as well.  By having a overloaded Iterable variant, then I can
> pass
> > in
> > > > any
> > > > > > Collection type I want (which shouldn't be restricted to just
> > List).
> > > > It
> > > > 

Re: [DISCUSS] changes to registerInterest API

2017-11-30 Thread Jason Huynh
Yeah, I am not sure if anyone does it, and I don't think it would be a good
idea to use a collection as a key, but I just thought I'd ask the question...

On Thu, Nov 30, 2017 at 2:33 PM John Blum  wrote:

> For instance, this came up recently...
>
>
> https://stackoverflow.com/questions/46551278/gemfire-composite-key-pojo-as-gemfire-key
>
> I have seen other similar posts too!
>
>
>
> On Thu, Nov 30, 2017 at 2:30 PM, John Blum  wrote:
>
> > Does anyone actually do this in practice?  If so, yikes!
> >
> > Even if the List is immutable, the elements may not be, so using a List
> as
> > a key starts to open 1 up to a lot of problems.
> >
> > As others have pointed out in SO and other channels, information should
> > not be kept in the key.
> >
> > It is perfect fine to have a "Composite" Key, but then define a
> > CompositeKey class type with properly implemented equals(:Object) and
> > hashCode():int methods.
> >
> > For the most part, Keys should really only ever be simple Scalar values
> > (e.g. Long, String, etc).
> >
> > -j
> >
> >
> >
> >
> > On Thu, Nov 30, 2017 at 2:25 PM, Jason Huynh  wrote:
> >
> >> I started work on the following plan:
> >> - deprecate current "ALL_KEYS" and List passing behavior in
> >> registerInterest
> >> ()
> >> - add registerInterestForAllKeys();
> >> - add registerInterest(T... keys)
> >> - add registerInterest(Iterablekeys)
> >>
> >> I might be missing something here but:
> >> With the addition of registerInterest(Iterable keys), I think we
> would
> >> not be able to register interest a List as the key itself.  A list would
> >> be
> >> iterated over due to the addition of registerInterest(Iterable keys).
> >> A
> >> list in a list would be passed into registerInterest and again be
> iterated
> >> over.  I could change the newly created registerInterest call and
> >> explicitly name it something else or are we ok with Iterables not being
> >> able to be registered as individual keys.
> >>
> >>
> >>
> >>
> >>
> >> On Mon, Nov 20, 2017 at 9:05 AM Kirk Lund  wrote:
> >>
> >> > John's approach looks best for when you need to specify keys.
> >> >
> >> > For ALL_KEYS, what about an API that doesn't require a token or all
> >> keys:
> >> >
> >> > public void registerInterestForAllKeys();
> >> >
> >> > On Fri, Nov 17, 2017 at 1:24 PM, Jason Huynh 
> wrote:
> >> >
> >> > > Thanks John for the clarification!
> >> > >
> >> > > On Fri, Nov 17, 2017 at 1:12 PM John Blum  wrote:
> >> > >
> >> > > > This...
> >> > > >
> >> > > > > The Iterable version would handle any collection type by having
> >> the
> >> > > user
> >> > > > pass
> >> > > > in the iterator for the collection.
> >> > > >
> >> > > > Is not correct.
> >> > > >
> >> > > > The Collection interface itself "extends" the
> >> java.lang.Iterable
> >> > > > interface (see here...
> >> > > >
> https://docs.oracle.com/javase/8/docs/api/java/util/Collection.html
> >> > > under
> >> > > > "*All
> >> > > > Superinterfaces*").
> >> > > >
> >> > > > Therefore a user can simply to this...
> >> > > >
> >> > > > *List* keys = ...
> >> > > >
> >> > > > region.registerInterest(keys); *// calls the
> >> > > > Region.registerInterest(:Iterable) method.*
> >> > > >
> >> > > > Alternatively, this would also be allowed...
> >> > > >
> >> > > > *Set* keys = ...
> >> > > >
> >> > > > region.registerInterest(keys);
> >> > > >
> >> > > >
> >> > > > On Fri, Nov 17, 2017 at 11:44 AM, Jason Huynh 
> >> > wrote:
> >> > > >
> >> > > > > Current idea is to:
> >> > > > > - deprecate current "ALL_KEYS" and List passing behavior in
> >> > > > > registerInterest()
> >> > > > > - add registerInterestAllKeys();
> >> > > > > - add register

Apache geode nightly failures

2017-12-01 Thread Jason Huynh
Has anyone been getting emails for the nightly builds and failures?  (My last
build email was 1027 and I think the latest is 1030.)  It looks like the
last few have been a mess with Out of Memory exceptions, and I think they
were all run on H34.  Should that machine be blacklisted?  If so, would
someone with permissions be able to do so?


Re: [DISCUSS] FunctionAdapter incompatible serialVersionUID

2017-12-05 Thread Jason Huynh
I've sent in a pull request for this:
https://github.com/apache/geode/pull/1119

I've added the old serialVersionUID to the FunctionAdapter, under the
assumption that anyone in 1.0-1.3 would have seen that the FunctionAdapter
had been deprecated and not used it.  The 1.0-1.3 users could also easily
change "extends FunctionAdapter" to "implements Function", recompile and
things would work for them.  The users pre-1.0 would have a slightly more
difficult approach.
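
For anyone affected, the 1.0-1.3 change is roughly the following (class name
and method bodies are made up for illustration):

  // before (deprecated):
  public class MyFunction extends FunctionAdapter {
    public void execute(FunctionContext context) { /* ... */ }
    public String getId() { return "MyFunction"; }
  }

  // after:
  public class MyFunction implements Function {
    public void execute(FunctionContext context) { /* ... */ }
    public String getId() { return "MyFunction"; }
  }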

Anyone interested, please review/accept or reject the pull request.

Thanks,
-Jason

On Wed, Nov 29, 2017 at 9:09 AM Darrel Schneider 
wrote:

> +1 to not removing deprecated apis in minor releases.
> The semver policy Alexander describes seems reasonable.
> In the past of we have had something deprecated for a long time we have
> felt free to remove it whenever we want but I think the semver policy is a
> better way to decide when we are free to remove deprecated external apis.
>
>
> On Wed, Nov 29, 2017 at 8:50 AM, Alexander Murmann 
> wrote:
>
> > Even though the class is deprecated, you should be able to go from one
> > minor version to another without having to worry about anything breaking.
> > The point of semver is to provide information about things breaking or
> not
> > without having to read the changelog. If we remove APIs in a minor
> version
> > because they were previously deprecated we break that contract.
> Semver.org
> > <https://semver.org/#how-should-i-handle-deprecating-functionality>
> > recommends
> > that the functionality should be marked as deprecated in a minor release
> > and then removed as part of a major release.
> >
> > On Tue, Nov 28, 2017 at 7:03 PM, Jacob Barrett 
> > wrote:
> >
> > > Since the class was deprecated and is technically only there for
> pre-1.0
> > > compatibly then the behavior of this class should be consistent with
> the
> > > pre-1.0 version. This will break 1.0 to 1.3 but anyone coding to a
> > post-1.0
> > > version should not be using this deprecated class.
> > >
> > >
> > > > On Nov 28, 2017, at 5:22 PM, Jason Huynh  wrote:
> > > >
> > > > *With your proposoal 1.0 - 1.3 users would have modify their source
> > code
> > > on
> > > > the client and the server forthe function, correct?*
> > > >
> > > > If they start a new geode server 1.4+ and happened to extend
> > > > functionAdapter (it was deprecated in 1.0) then they would have to
> > > > recompile their client to not use functionAdapter.
> > > >
> > > > This should only affect users that extend FunctionAdapter and execute
> > > > functions by serializing them to the server from the client.  If they
> > > > execute by id it should not run into this problem...
> > > >
> > > >> On Tue, Nov 28, 2017 at 5:20 PM Jason Huynh 
> > wrote:
> > > >>
> > > >> Dan, yeah, the suggested change in the stack overflow answer does
> work
> > > and
> > > >> I was able to put an if with the exact serialVersionUid before
> posting
> > > the
> > > >> proposal, but it is pretty hacky and may affect another class that
> > > somehow
> > > >> generated the same uid.  I can make that change too but I'd prefer
> not
> > > to
> > > >> have to maintain it moving forward...
> > > >>
> > > >>
> > > >>
> > > >>> On Tue, Nov 28, 2017 at 5:09 PM Dan Smith 
> wrote:
> > > >>>
> > > >>> I agree I don't think we can get rid of FunctionAdapter until the
> > next
> > > >>> major version.
> > > >>>
> > > >>> I was thinking FunctionAdapter is rather widely used, but then I'm
> > > >>> surprised no one has hit this yet.
> > > >>>
> > > >>> All of the options kinda suck here - either pre 1.0 users have a
> > > >>> compatibility issue or 1.0-1.3 users do. With your proposoal 1.0 -
> > 1.3
> > > >>> users would have modify their source code on the client and the
> > server
> > > for
> > > >>> the function, correct?
> > > >>>
> > > >>> If we got really fancy we could actually ignore the
> serialVersionUUID
> > > for
> > > >>> this class like this - https://stackoverflow.com/a/1816711/2813144
> .
> > > But
> > > >>> it's pretty messy.

[DISCUSS] Proposal to Deprecate Hash Index

2017-12-05 Thread Jason Huynh
This is a proposal to deprecate the existing Hash Index and the create hash
index APIs.


Currently, the Hash Index name causes confusion. It is not a traditional
hash lookup index, but more of a memory-savings index.  The index does not
store index keys in memory and must hash the keys every time.  The index
synchronizes on a backing array, and when the backing array needs to be
expanded, it currently needs to rehash all elements in the array.  This can
be very problematic for larger data sets.


There were improvements made to one of the functional indexes (the compact
range index) prior to open sourcing.  These improvements helped reduce the
memory consumption of that index and make it very similar in size to a hash
index, though the keys are still stored in memory.  It is probably close
enough to be a replacement for the hash index in most cases.  Its read/write
performance is also better than the hash index's.
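
For anyone creating hash indexes today, the replacement would look roughly
like this (index, expression, and region names are made up, and a Cache
reference named cache is assumed):

  QueryService qs = cache.getQueryService();
  // instead of the hash index:
  //   qs.createHashIndex("idIndex", "id", "/portfolios");
  // create a (compact) range index with a similar memory footprint:
  qs.createIndex("idIndex", "id", "/portfolios");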


If anyone has any objections, please let us know and why.


Thanks,

- Jason


Re: Next release: 1.4.0

2018-01-04 Thread Jason Huynh
Hi Swapnil,

GEODE-4140 was just marked for 1.4.  I think part of GEODE-4140 should be
fixed because the process.ClusterConfigurationNotAvailableException should
probably be reinstated.  If others don't think it's needed then feel free
to remove the fix tag.

-Jason

On Thu, Jan 4, 2018 at 4:38 PM Dan Smith  wrote:

> Our process up to this point has been to not ship until the jenkins builds
> on the release branch pass. We've been experimenting with concourse in
> parallel with jenkins, but the jenkins builds on develop at least are still
> pretty messy. How are we going to ship this release? Should both be
> passing?
>
> -Dan
>
> On Thu, Jan 4, 2018 at 4:23 PM, Swapnil Bawaskar 
> wrote:
>
> > Since all the issues tagged for 1.4.0 release
> >  > rapidView=92&projectKey=GEODE&view=planning&selectedIssue=
> > GEODE-3688&versions=visible&selectedVersion=12341842>
> > have been addressed, I went ahead and created a release branch for 1.4.0.
> >
> > Can someone please update the concourse pipelines to pick up this release
> > branch?
> >
> > Thanks!
> >
> >
> > On Tue, Nov 28, 2017 at 1:58 PM Swapnil Bawaskar 
> > wrote:
> >
> > > Well, making sure that the JIRA's status is up-to-date and removing the
> > > 1.4.0 version tag if the fix can wait for a later release.
> > >
> > > On Tue, Nov 28, 2017 at 12:22 PM Michael William Dodge <
> > mdo...@pivotal.io>
> > > wrote:
> > >
> > >> What sort of update? I know that GEODE-4010 has a PR that's awaiting
> > >> review and merge.
> > >>
> > >> Sarge
> > >>
> > >> > On 28 Nov, 2017, at 10:03, Swapnil Bawaskar 
> > >> wrote:
> > >> >
> > >> > I would like to volunteer as a release manager.
> > >> > Currently there are 14 issues that are marked for 1.4.0. If you are
> > >> working
> > >> > on any of these, can you please update the JIRA?
> > >> >
> > >> https://issues.apache.org/jira/secure/RapidBoard.jspa?
> > rapidView=92&projectKey=GEODE&view=planning&selectedIssue=
> > GEODE-3688&versions=visible&selectedVersion=12341842
> > >> >
> > >> > Thanks!
> > >> >
> > >> > On Tue, Nov 28, 2017 at 9:42 AM Anthony Baker 
> > >> wrote:
> > >> >
> > >> >> Bump.  Any volunteers?  If not, I’ll do this.
> > >> >>
> > >> >> Anthony
> > >> >>
> > >> >>
> > >> >>> On Nov 22, 2017, at 1:48 PM, Anthony Baker 
> > wrote:
> > >> >>>
> > >> >>> We released Geode 1.3.0 at the end of October.  Our next release
> > will
> > >> be
> > >> >> 1.4.0.  Questions:
> > >> >>>
> > >> >>> 1) Who wants to volunteer as a release manager?
> > >> >>> 2) What do we want to include in the release?
> > >> >>> 3) When do we want to do this?
> > >> >>>
> > >> >>> IMO, let's shoot for an early Dec release.
> > >> >>>
> > >> >>> Anthony
> > >> >>>
> > >> >>
> > >> >>
> > >>
> > >>
> >
>


Re: Next release: 1.4.0

2018-01-05 Thread Jason Huynh
Hi Anil,

Thanks, I'll take a look at that; I added that test a little while ago.

On Fri, Jan 5, 2018 at 10:13 AM Anilkumar Gingade 
wrote:

> I see :rat failing on Jenkins...
>
> The last run: https://builds.apache.org/job/Geode-release/100/console
>
> All test reports at
> /home/jenkins/jenkins-slave/workspace/Geode-release/build/reports/combined
>
> FAILURE: Build failed with an exception.
>
> * What went wrong:
> Execution failed for task ':rat'.
> > Found 1 files with unapproved/unknown licenses. See
> file:/home/jenkins/jenkins-slave/workspace/Geode-release/build/reports/rat/rat-report.txt
>
> *** Error:
>
> Unapproved licenses:
>
> /home/jenkins/jenkins-slave/workspace/Geode-release/geode-core/src/test/resources/org/apache/geode/cache/execute/FunctionAdapterJUnitTest.serializedFunctionAdapterWithDifferentSerialVersionUID
>
>
> -Anil.
>
>
>
> On Fri, Jan 5, 2018 at 10:00 AM, Anthony Baker  wrote:
>
> > +1
> >
> > It should be pretty easy to clone the current pipeline for the 1.4.0
> > release branch.
> >
> > I’ll plan to update the Jenkins jobs to run the `build` and
> > `updateArchives` tasks since those still have value.
> >
> > Anthony
> >
> >
> > > On Jan 4, 2018, at 5:03 PM, Alexander Murmann 
> > wrote:
> > >
> > > The Concourse pipeline seems much more reliable at this point and the
> > > pipelines should be providing equivalent test coverage. Given that, are
> > > there any reasons to not deprecate Jenkins?
> > >
> > > On Thu, Jan 4, 2018 at 4:55 PM, Jason Huynh  wrote:
> > >
> > >> Hi Swapnil,
> > >>
> > >> GEODE-4140 was just marked for 1.4.  I think part of GEODE-4140 should
> > be
> > >> fixed because the process.ClusterConfigurationNotAvailableException
> > should
> > >> probably be reinstated.  If others don't think it's needed then feel
> > free
> > >> to remove the fix tag.
> > >>
> > >> -Jason
> > >>
> > >> On Thu, Jan 4, 2018 at 4:38 PM Dan Smith  wrote:
> > >>
> > >>> Our process up to this point has been to not ship until the jenkins
> > >> builds
> > >>> on the release branch pass. We've been experimenting with concourse
> in
> > >>> parallel with jenkins, but the jenkins builds on develop at least are
> > >> still
> > >>> pretty messy. How are we going to ship this release? Should both be
> > >>> passing?
> > >>>
> > >>> -Dan
> > >>>
> > >>> On Thu, Jan 4, 2018 at 4:23 PM, Swapnil Bawaskar <
> sbawas...@pivotal.io
> > >
> > >>> wrote:
> > >>>
> > >>>> Since all the issues tagged for 1.4.0 release
> > >>>> <https://issues.apache.org/jira/secure/RapidBoard.jspa?
> > >>>> rapidView=92&projectKey=GEODE&view=planning&selectedIssue=
> > >>>> GEODE-3688&versions=visible&selectedVersion=12341842>
> > >>>> have been addressed, I went ahead and created a release branch for
> > >> 1.4.0.
> > >>>>
> > >>>> Can someone please update the concourse pipelines to pick up this
> > >> release
> > >>>> branch?
> > >>>>
> > >>>> Thanks!
> > >>>>
> > >>>>
> > >>>> On Tue, Nov 28, 2017 at 1:58 PM Swapnil Bawaskar <
> > sbawas...@pivotal.io
> > >>>
> > >>>> wrote:
> > >>>>
> > >>>>> Well, making sure that the JIRA's status is up-to-date and removing
> > >> the
> > >>>>> 1.4.0 version tag if the fix can wait for a later release.
> > >>>>>
> > >>>>> On Tue, Nov 28, 2017 at 12:22 PM Michael William Dodge <
> > >>>> mdo...@pivotal.io>
> > >>>>> wrote:
> > >>>>>
> > >>>>>> What sort of update? I know that GEODE-4010 has a PR that's
> awaiting
> > >>>>>> review and merge.
> > >>>>>>
> > >>>>>> Sarge
> > >>>>>>
> > >>>>>>> On 28 Nov, 2017, at 10:03, Swapnil Bawaskar <
> sbawas...@pivotal.io
> > >>>
> > >>>>>> wrote:
> > >>>>>>>
> > >>>>>>> I would like to volunteer as a release manager.
> > >>>>>>> Currently there are 14 issues that are marked for 1.4.0. If you
> > >> are
> > >>>>>> working
> > >>>>>>> on any of these, can you please update the JIRA?
> > >>>>>>>
> > >>>>>> https://issues.apache.org/jira/secure/RapidBoard.jspa?
> > >>>> rapidView=92&projectKey=GEODE&view=planning&selectedIssue=
> > >>>> GEODE-3688&versions=visible&selectedVersion=12341842
> > >>>>>>>
> > >>>>>>> Thanks!
> > >>>>>>>
> > >>>>>>> On Tue, Nov 28, 2017 at 9:42 AM Anthony Baker  >
> > >>>>>> wrote:
> > >>>>>>>
> > >>>>>>>> Bump.  Any volunteers?  If not, I’ll do this.
> > >>>>>>>>
> > >>>>>>>> Anthony
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>> On Nov 22, 2017, at 1:48 PM, Anthony Baker 
> > >>>> wrote:
> > >>>>>>>>>
> > >>>>>>>>> We released Geode 1.3.0 at the end of October.  Our next
> release
> > >>>> will
> > >>>>>> be
> > >>>>>>>> 1.4.0.  Questions:
> > >>>>>>>>>
> > >>>>>>>>> 1) Who wants to volunteer as a release manager?
> > >>>>>>>>> 2) What do we want to include in the release?
> > >>>>>>>>> 3) When do we want to do this?
> > >>>>>>>>>
> > >>>>>>>>> IMO, let's shoot for an early Dec release.
> > >>>>>>>>>
> > >>>>>>>>> Anthony
> > >>>>>>>>>
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>
> > >>>>>>
> > >>>>
> > >>>
> > >>
> >
> >
>


[PROPOSAL] Add mod and arithmetic functionality to OQL

2018-02-06 Thread Jason Huynh
This is a proposal to add the ability to execute arithmetic operations on
numeric fields in an OQL query.  The corresponding jira ticket is:
https://issues.apache.org/jira/browse/GEODE-4327

The operation symbols that will be added are mod, %, +, -, /, and *.  These
correspond to modulo (both mod and %), addition, subtraction, division, and
multiplication.
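
For example, queries along these lines would become valid (region and field
names are made up):

SELECT * FROM /portfolios p WHERE p.shares % 2 = 0
SELECT * FROM /orders o WHERE (o.price * o.quantity) - o.discount > 100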

There is an open PR for this at: https://github.com/apache/geode/pull/1316


Re: [PROPOSAL] Add mod and arithmetic functionality to OQL

2018-02-06 Thread Jason Huynh
Based on the oql.g file I think it would, but I can add some tests to be sure.
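
For example, if precedence is honored, these two should return the same
results (illustrative names):

SELECT * FROM /portfolios p WHERE p.shares + p.bonus * 2 > 10
SELECT * FROM /portfolios p WHERE p.shares + (p.bonus * 2) > 10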

On Tue, Feb 6, 2018 at 2:04 PM Anthony Baker  wrote:

> Will the OQL engine obey operator precedence rules?
>
> Anthony
>
>
> > On Feb 6, 2018, at 10:22 AM, Jason Huynh  wrote:
> >
> > This is a proposal to add the ability to execute arithmetic operations on
> > numeric fields in an OQL query.  The corresponding jira ticket is:
> > https://issues.apache.org/jira/browse/GEODE-4327
> >
> > The operation symbols that will be added are mod, %, +, -, /, *.  These
> > will correspond to modulo, modulo, addition, subtraction, division,
> > multiplication operations.
> >
> > There is an open PR for this at:
> https://github.com/apache/geode/pull/1316
>
>


Re: Geode unit tests 'develop/DistributedTest' took too long to execute

2018-02-12 Thread Jason Huynh
I logged a ticket for this hang:
https://issues.apache.org/jira/browse/GEODE-4650  From what I can tell, it
looks like some sort of race condition where the DLockService is stuck
trying to clearGrantor.

On Fri, Feb 9, 2018 at 10:20 PM  wrote:

> Pipeline results can be found at:
>
> Concourse:
> https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/135
>
>


Re: [DISCUSS] changes to registerInterest API

2018-02-20 Thread Jason Huynh
While doing this work, it looks like we have an equivalent
unregisterInterest(K key) that accepts either a List of keys or a single key.  To
keep things consistent with registerInterest, I propose that we
deprecate the behavior of passing in a list as the key and introduce:

unregisterInterestForKeys(Iterable<K> keys)
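
A minimal sketch of the proposed surface from a client's point of view
(method names follow the proposals in this thread rather than any shipped
API; key values are made up):

import java.util.Arrays;
import org.apache.geode.cache.Region;

public class InterestRegistrationSketch {
  static void subscribe(Region<String, Object> region) {
    region.registerInterestForAllKeys();                          // replaces registerInterest("ALL_KEYS")
    region.registerInterest("k1", "k2");                          // proposed varargs overload
    region.registerInterest(Arrays.asList("k1", "k2"));           // proposed Iterable overload
    region.unregisterInterestForKeys(Arrays.asList("k1", "k2"));  // proposed here, mirrors the register side
  }
}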


On Fri, Dec 1, 2017 at 7:55 AM Anthony Baker  wrote:

> I think Dan’s suggestion clarifies the intent.
>
> Anthony
>
>
> > On Nov 30, 2017, at 3:54 PM, Dan Smith  wrote:
> >
> > I think it should be registerInterestAll(Iterable keys) to mirror
> > Collection.addAll.
> >
> > I could easily see a user creating their own Tuple class or using one
> that
> > happens to also implement Iterable as their key.
> >
> > We're not doing a very good job of discouraging people to use iterable
> > objects as their key if they work everywhere else and then break with
> this
> > one API.
> >
> > -Dan
> >
> > On Thu, Nov 30, 2017 at 2:45 PM, John Blum  wrote:
> >
> >> No, no... good question.
> >>
> >> I just think it would be wiser if users created a single, CompositeKey
> >> class type, which properly implements equals and hashCode methods, as I
> >> pointed out.
> >>
> >> I don't see any advantage in using a java.util.Collection as a key over
> >> implementing a CompositeKey type.
> >>
> >> As such, anything we can do to discourage users from using Collection
> types
> >> as a key, I think is a good thing.
> >>
> >>
> >> On Thu, Nov 30, 2017 at 2:35 PM, Jason Huynh  wrote:
> >>
> >>> Yeah I am not sure if anyone does it, and I don't think it would be a
> good
> >>> idea to use a collection as a key but just thought I'd ask the
> >> question...
> >>>
> >>> On Thu, Nov 30, 2017 at 2:33 PM John Blum  wrote:
> >>>
> >>>> For instance, this came up recently...
> >>>>
> >>>>
> >>>> https://stackoverflow.com/questions/46551278/gemfire-
> >>> composite-key-pojo-as-gemfire-key
> >>>>
> >>>> I have seen other similar posts too!
> >>>>
> >>>>
> >>>>
> >>>> On Thu, Nov 30, 2017 at 2:30 PM, John Blum  wrote:
> >>>>
> >>>>> Does anyone actually do this in practice?  If so, yikes!
> >>>>>
> >>>>> Even if the List is immutable, the elements may not be, so using a
> >> List
> >>>> as
> >>>>> a key starts to open one up to a lot of problems.
> >>>>>
> >>>>> As others have pointed out in SO and other channels, information
> >> should
> >>>>> not be kept in the key.
> >>>>>
> >>>>> It is perfect fine to have a "Composite" Key, but then define a
> >>>>> CompositeKey class type with properly implemented equals(:Object) and
> >>>>> hashCode():int methods.
> >>>>>
> >>>>> For the most part, Keys should really only ever be simple Scalar
> >> values
> >>>>> (e.g. Long, String, etc).
> >>>>>
> >>>>> -j
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> On Thu, Nov 30, 2017 at 2:25 PM, Jason Huynh 
> >>> wrote:
> >>>>>
> >>>>>> I started work on the following plan:
> >>>>>> - deprecate current "ALL_KEYS" and List passing behavior in
> >>>>>> registerInterest
> >>>>>> ()
> >>>>>> - add registerInterestForAllKeys();
> >>>>>> - add registerInterest(T... keys)
> >>>>>> - add registerInterest(Iterable keys)
> >>>>>>
> >>>>>> I might be missing something here but:
> >>>>>> With the addition of registerInterest(Iterable keys), I think we
> >>>> would
> >>>>>> not be able to register interest in a List as the key itself.  A list
> >>> would
> >>>>>> be
> >>>>>> iterated over due to the addition of registerInterest(Iterable
> >>> keys).
> >>>>>> A
> >>>>>> list in a list would be passed into registerInterest and again be
> >>>> iterated
> >>>>>> over.  I could change the newly cr

Re: [Proposal] Thread monitoring mechanism

2018-02-21 Thread Jason Huynh
I am assuming this would be for all threads/thread pools and not specific to
Function threads.  I wonder what the impact would be for put/get operations,
or whether we are going to target specific operations.



On Tue, Feb 20, 2018 at 1:04 AM Gregory Vortman 
wrote:

> Hello team,
> One of the most severe issues hitting our real-time application is threads
> getting stuck for multiple reasons, such as long-lasting locks, deadlocks, and
> threads which wait forever for a reply in case of a packet drop, etc.
> Such stuck threads fly under the radar of the existing system health check
> methods.
> In mission-critical applications, this results in an immediate
> outage.
>
> As a short-term measure we are implementing a kind of internal watchdog
> mechanism for stuck-thread detection:
>    There is a registration object.
>    The function executor has start/end hooks to
> register/unregister the thread via the registration object.
> A customized monitoring scheduled thread is spawned on startup. The thread
> wakes up every N seconds to scan the registration map and detect threads
> that have not unregistered for a long time (configurable).
> Once such threads have been detected, a process stack dump is taken and a
> thread stack statistic metric is provided.
>
> This helps us monitor, detect, and quickly decide which action should be
> taken - usually bouncing the member (a consistency issue is possible, but in
> our case that is better than denial of service).
> The above solution does not touch Geode core code; it is implemented
> within the boundaries of customized code only.
>
> I would like to raise a proposal to introduce a long-term, generic thread
> monitoring mechanism to detect threads which are stuck for any reason.
> It would maintain a monitoring object with start/end methods to be invoked,
> similar to FunctionStats.startFunctionExecution and
> FunctionStats.endFunctionExecution.
>
> Your feedback would be appreciated
>
> Thank you for cooperation.
> Best regards!
>
> Gregory Vortman
>
> This message and the information contained herein is proprietary and
> confidential and subject to the Amdocs policy statement,
>
> you may review at https://www.amdocs.com/about/email-disclaimer <
> https://www.amdocs.com/about/email-disclaimer>
>
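
A minimal sketch of the watchdog idea described above (class and method names
here are made up for illustration, not existing Geode APIs):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ThreadStuckMonitor {
  // Threads register when they start a unit of work and unregister when they
  // finish; a scheduled scanner flags entries that linger past a threshold.
  private final Map<Thread, Long> startTimes = new ConcurrentHashMap<>();
  private final long stuckThresholdMillis;

  public ThreadStuckMonitor(long scanIntervalSeconds, long stuckThresholdMillis) {
    this.stuckThresholdMillis = stuckThresholdMillis;
    ScheduledExecutorService scanner = Executors.newSingleThreadScheduledExecutor();
    scanner.scheduleAtFixedRate(this::scan, scanIntervalSeconds, scanIntervalSeconds, TimeUnit.SECONDS);
  }

  public void startExecution() {   // hook at the start of an operation
    startTimes.put(Thread.currentThread(), System.currentTimeMillis());
  }

  public void endExecution() {     // hook when the operation completes
    startTimes.remove(Thread.currentThread());
  }

  private void scan() {
    long now = System.currentTimeMillis();
    startTimes.forEach((thread, start) -> {
      if (now - start > stuckThresholdMillis) {
        // A real implementation might dump the full process stack and update a stat here.
        System.err.println("Possible stuck thread: " + thread.getName());
        for (StackTraceElement frame : thread.getStackTrace()) {
          System.err.println("    at " + frame);
        }
      }
    });
  }
}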


Re: [PROPOSAL] use default value for validate-serializable-objects in dunit

2018-03-15 Thread Jason Huynh
+1 agreement with Kirk and Sean.

Any non-default configuration should probably have its own set of tests.
I can understand some exploratory work where someone might want to run the
whole precheckin with a non-default value to help identify areas that they
may have missed or that are unexpectedly affected.  At that point the relevant
tests should have been identified, and copies of them could be made with the
new configuration.
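
For what it's worth, a rough sketch of opting a single test into the setting
(this assumes the dunit base class still exposes getDistributedSystemProperties();
the exact base-class package may differ):

import java.util.Properties;
import org.apache.geode.distributed.ConfigurationProperties;
import org.apache.geode.test.dunit.cache.internal.JUnit4CacheTestCase;

public class MyFeatureValidationDUnitTest extends JUnit4CacheTestCase {

  @Override
  public Properties getDistributedSystemProperties() {
    Properties props = super.getDistributedSystemProperties();
    // Opt this test (and only this test) into serialization validation.
    props.setProperty(ConfigurationProperties.VALIDATE_SERIALIZABLE_OBJECTS, "true");
    return props;
  }

  // ... test methods exercising the feature with validation enabled ...
}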



On Thu, Mar 15, 2018 at 9:46 AM Sean Goller  wrote:

> I agree with this. We should have a default state that reflects an “out of
> the box” configuration, and if a test expects a different configuration, it
> should manage that within the context of the test.
>
> -Sean
>
> On Tue, Mar 13, 2018 at 10:04 AM Kirk Lund  wrote:
>
> > I want to propose using the default value for
> validate-serializable-object
> > in dunit tests instead of forcing it on for all dunit tests. I'm
> > sympathetic to the reason why this was done: ensure that all existing
> code
> > and future code will function properly with this feature enabled.
> > Unfortunately running all dunit tests with it turned on is not a good way
> > to achieve this.
> >
> > Here are my reasons:
> >
> > 1) All tests should start out with the same defaults that Users have. If
> we
> > don't do this, we are going to goof up sometime and release something
> that
> > only works with this feature turned on or worsen the User experience of
> > Geode in some way.
> >
> > 2) All tests should have sovereign control over their own configuration.
> We
> > should strive towards being able to look at a test and determine its
> config
> > at a glance without having to dig through the framework or other classes
> > for hidden configuration. We continue to improve dunit in this area but I
> > think adding to the problem is going in the wrong direction.
> >
> > 3) It sets a bad precedent. Do we follow this approach once or keep
> adding
> > additional non-default features when we need to test them too? Next one
> is
> > GEODE-4769 "Serialize region entry before putting in local cache" which
> > will be disabled by default in the next Geode release and yet by turning
> it
> > on by default for all of precheckin we were able to find lots of problems
> > in both the product code and test code.
> >
> > 4) This is already starting to cause confusion for developers thinking
> its
> > actually a product default or expecting it to be enabled in other
> > (non-dunit) tests.
> >
> > Alternatives for test coverage:
> >
> > There really are no reasonable shortcuts for end-to-end test coverage of
> > any feature. We need to write new tests or identify existing tests to
> > subclass with the feature enabled.
> >
> > 1) Subclass specific tests to turn validate-serializable-object on for
> that
> > test case. Examples of this include a) dunit tests that execute Region
> > tests with OffHeap enabled, b) dunit tests that flip on HTTP over GFSH,
> c)
> > dunit tests that run with SSL or additional security enabled.
> >
> > 2) Write new tests that cover all features with
> > validate-serializable-object
> > enabled.
> >
> > 3) Rerun all of dunit with and without the option. This doesn't sound
> very
> > reasonable to me, but it's the closest to what we really want or need.
> >
> > Any other ideas or suggestions other than forcing all dunit tests to run
> > with a non-default value?
> >
> > Thanks,
> > Kirk
> >
>


Re: Index on Region

2018-04-16 Thread Jason Huynh
Hi Jinmei,

I am not sure whether these elements were deprecated or not.  I know that
they were valid at one time, and a user could specify the following in their
app:


   


I believe the "new" way to do this would be:



How would deprecation for this work? Would you roll a new version of
the schema/definition?
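
A hedged sketch of the two forms (element and attribute names come from
cache-1.0.xsd; the index name, from-clause, and expression are made up):

<!-- older nested-element form -->
<index name="idIndex">
  <functional from-clause="/portfolios p" expression="p.id"/>
</index>

<!-- attribute form -->
<index name="idIndex" from-clause="/portfolios p" expression="p.id"/>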


On Mon, Apr 16, 2018 at 10:28 AM Jinmei Liao  wrote:

> From the cache-1.0.xsd, we noticed that an index can have elements like
> "functional" and "primary-key", but the docs did not mention anything about
> it (
>
> https://geode.apache.org/docs/guide/13/reference/topics/cache_xml.html#region
> ).
> I am wondering if these are deprecated? Would it be better for the xml
> created by the cluster configuration not to contain any of these?
>
> 
>   
> 
>   
> A functional type of index needs a from-clause, expression
> which are mandatory.
> The import string is used for specifying the type of Object in
> the region or
> the type of Object which the indexed expression evaluates to.
>   
> 
> 
>   
>   
>   
> 
>   
>
>   
> 
>   
> A primary-key type of index needs a field attribute which is
> mandatory.
> There should be only one or zero primary-index defined for a region
>   
> 
> 
>   
> 
>   
> 
>
>
>
> --
> Cheers
>
> Jinmei
>


Re: [Proposal]: behavior change when region doesn't exist in cluster configuration

2018-04-27 Thread Jason Huynh
Hi Jinmei and Naba,

I don't think you can define two regions with the same name and different
types.  We would throw an IllegalStateException for the node that tried to
create the region second.  At least that was the behavior I was seeing when
I tried to create a replicate region and a partitioned region with the same
name (admittedly using the java api and not cluster config).  So then if
you run a gfsh command to create an index, it would only create the index
on the first node and report back to cluster config the first node's
configuration and the new index.

Going back to the original proposal, if the user ever wanted to have the
cluster config updated with the region, how would they sync up their
cluster config without bringing everything down?  Sure, they can export xml
and bring up another server with that xml, but that doesn't get them migrated
to cluster config...



On Fri, Apr 27, 2018 at 2:48 PM Jinmei Liao  wrote:

> Hi, Naba, I believe this is possible even before, with or without using
> cluster configuration. That's why we have to say stuff defined using
> cache.xml is not mean to  be a cluster wide configuration.
>
> On Fri, Apr 27, 2018 at 2:40 PM, Nabarun Nag  wrote:
>
> > With the new implementation, will it allow two different regions with the
> > same exact name to exist in the cluster ?
> >
> > I meant like server A had a cache.xml with region “test” with different
> > properties as to the cache.xml in server B for region “test”. And the
> > locator had an empty cluster config.
> >
> > So now when servers A and B start, they will have different regions but
> the
> > same name “test”. Because there is no sync up with locator’s cluster
> > config.
> >
> > Pardon me if my understanding is wrong.
> >
> >
> > Regards
> > Nabarun
> >
> >
> > On Fri, Apr 27, 2018 at 2:23 PM Jinmei Liao  wrote:
> >
> > > My point is: we can't keep "mending" the wrong behavior, otherwise we
> can
> > > not move forward.
> > >
> > > On Fri, Apr 27, 2018 at 2:22 PM, Jinmei Liao 
> wrote:
> > >
> > > > So the current behavior is, when a customer starts a server with
> > > cache.xml
> > > > that has a region defined, and then later on issues a gfsh command
> > > `create
> > > > index` on that region, the command output would be something like:
> > > > >create index .
> > > > Member   |   Status
> > > > 
> > > > server-1   |  Index successfully created.
> > > >
> > > > Cluster configuration for "cluster" is not updated
> > > > Region XYZ does not exist in the cluster configuration (or something
> to
> > > > this effect).
> > > >
> > > > The command result would tell user exactly what happened and what not
> > > > happened. The point is, a server's own cache.xml is NOT meant to be a
> > > > "cluster" wide configuration. If customer is in the habit of starting
> > up
> > > > server with cache.xml but yet still want to have consistent
> > region/index
> > > > defined in the cluster, export a server's config and use that to
> start
> > > > another server.
> > > >
> > > >
> > > >
> > > > On Fri, Apr 27, 2018 at 2:03 PM, Diane Hardman 
> > > > wrote:
> > > >
> > > >> I talked with Barbara and understand the long term effort to
> deprecate
> > > >> cache.xml in favor of cluster config and I heartily agree.
> > > >> I think a good step in that direction is to provide a migration tool
> > for
> > > >> users that reads all cache.xml files for current members and store
> > them
> > > in
> > > >> cluster config, throwing exceptions and logging errors when region
> > > >> definitions conflict (for the same region name) on different servers
> > in
> > > >> the
> > > >> same cluster.
> > > >> We might then consider removing the cache.xml files and rely on gfsh
> > and
> > > >> (in the future, hopefully) Java API's to keep cluster config
> > up-to-date.
> > > >>
> > > >> Thanks!
> > > >>
> > > >> On Fri, Apr 27, 2018 at 12:56 PM, Jinmei Liao 
> > > wrote:
> > > >>
> > > >> > The decision is to go with the new behavior (I believe :-)).  The
> > > region
> > > >> > does not exist in the cluster configuration to begin with since
> it's
> > > not
> > > >> > created using gfsh, so we have no way of checking unless we make
> an
> > > >> extra
> > > >> > trip to the region to find out what kind of region it is, but
> again
> > > >> > different server might have different opinion about what it is.
> > > >> >
> > > >> > On Fri, Apr 27, 2018 at 12:49 PM, Diane Hardman <
> > dhard...@pivotal.io>
> > > >> > wrote:
> > > >> >
> > > >> > > Since we are working on enhancing Lucene support to allow
> adding a
> > > >> Lucene
> > > >> > > index to an existing region containing data, I am very
> interested
> > in
> > > >> the
> > > >> > > decision here.
> > > >> > > Like Mike, I also prefer keeping the original behavior of
> updating
> > > >> > cluster
> > > >> > > config with both the region and the index if it was not there
> > > before.
> > > >> > > Is there something preventing you from checking cluster config
> > fo

Re: [Proposal]: behavior change when region doesn't exist in cluster configuration

2018-04-27 Thread Jason Huynh
*Correction to my last email: I was using the java api and not cache.xml

On Fri, Apr 27, 2018 at 3:40 PM Jason Huynh  wrote:

> Hi Jinmei and Naba,
>
> I don't think you can define two regions with the same name and different
> types.  We would throw an IllegalStateException for the node that tried to
> create the region second.  At least that was the behavior I was seeing when
> I tried to create a replicate region and a partitioned region with the same
> name (admittedly using the java api and not cluster config).  So then if
> you run a gfsh command to create an index, it would only create the index
> on the first node and report back to cluster config the first nodes
> configuration and the new index.
>
> Going back to the original proposal, if the user ever wanted to have the
> cluster config updated with the region, how would they sync up their
> cluster config without bringing everything down?  Sure they can export xml
> and bring up another server with xml but that doesn't get them migrated to
> cluster config...
>
>
>
> On Fri, Apr 27, 2018 at 2:48 PM Jinmei Liao  wrote:
>
>> Hi, Naba, I believe this is possible even before, with or without using
>> cluster configuration. That's why we have to say stuff defined using
>> cache.xml is not mean to  be a cluster wide configuration.
>>
>> On Fri, Apr 27, 2018 at 2:40 PM, Nabarun Nag  wrote:
>>
>> > With the new implementation, will it allow two different regions with
>> the
>> > same exact name to exist in the cluster ?
>> >
>> > I meant like server A had a cache.xml with region “test” with different
>> > properties as to the cache.xml in server B for region “test”. And the
>> > locator had an empty cluster config.
>> >
>> > So now when servers A and B start, they will have different regions but
>> the
>> > same name “test”. Because there is no sync up with locator’s cluster
>> > config.
>> >
>> > Pardon me if my understanding is wrong.
>> >
>> >
>> > Regards
>> > Nabarun
>> >
>> >
>> > On Fri, Apr 27, 2018 at 2:23 PM Jinmei Liao  wrote:
>> >
>> > > My point is: we can't keep "mending" the wrong behavior, otherwise we
>> can
>> > > not move forward.
>> > >
>> > > On Fri, Apr 27, 2018 at 2:22 PM, Jinmei Liao 
>> wrote:
>> > >
>> > > > So the current behavior is, when a customer starts a server with
>> > > cache.xml
>> > > > that has a region defined, and then later on issues a gfsh command
>> > > `create
>> > > > index` on that region, the command output would be something like:
>> > > > >create index .
>> > > > Member   |   Status
>> > > > 
>> > > > server-1   |  Index successfully created.
>> > > >
>> > > > Cluster configuration for "cluster" is not updated
>> > > > Region XYZ does not exist in the cluster configuration (or
>> something to
>> > > > this effect).
>> > > >
>> > > > The command result would tell user exactly what happened and what
>> not
>> > > > happened. The point is, a server's own cache.xml is NOT meant to be
>> a
>> > > > "cluster" wide configuration. If customer is in the habit of
>> starting
>> > up
>> > > > server with cache.xml but yet still want to have consistent
>> > region/index
>> > > > defined in the cluster, export a server's config and use that to
>> start
>> > > > another server.
>> > > >
>> > > >
>> > > >
>> > > > On Fri, Apr 27, 2018 at 2:03 PM, Diane Hardman > >
>> > > > wrote:
>> > > >
>> > > >> I talked with Barbara and understand the long term effort to
>> deprecate
>> > > >> cache.xml in favor of cluster config and I heartily agree.
>> > > >> I think a good step in that direction is to provide a migration
>> tool
>> > for
>> > > >> users that reads all cache.xml files for current members and store
>> > them
>> > > in
>> > > >> cluster config, throwing exceptions and logging errors when region
>> > > >> definitions conflict (for the same region name) on different
>> servers
>> > in
>> > > >> the
>> > > >> same cluster.
>> > 
