Build Jobs not accessible?

2018-11-09 Thread Ju@N
Hello team, I submitted https://github.com/apache/geode/pull/2822 a couple of hours ago and the concourse-ci/IntegrationTest check failed, even though *./gradlew integrationTest* runs fine locally. I've tried to access the failed build (https://concourse.apachegeode-ci.info/builds/14585) to see the ou

Re: Build Jobs not accessible?

2018-11-09 Thread Jens Deppe
Hi Juan. It could be a cookie problem - have you tried clearing them? Can you access the top-level https://concourse.apachegeode-ci.info? I can access your build just fine. --Jens On Fri, Nov 9, 2018 at 5:52 AM Ju@N wrote: > Hello team, > > I've submitted https://github.com/apache/geode/pull/

Re: Build Jobs not accessible?

2018-11-09 Thread Juan José Ramos
Hello Jens, Yes, I've tried from different browsers after deleting all cookies, history, and data; same result. I can access https://concourse.apachegeode-ci.info without any problems, but I only see *apache-develop-main*, *apache-develop-metrics*, *apache-develop-examples* and *apache-develop-imag

Re: Build Jobs not accessible?

2018-11-09 Thread Jacob Barrett
I bet that after we regenerated it yesterday it isn’t public anymore. I’ll fix that shortly. -Jake > On Nov 9, 2018, at 6:59 AM, Juan José Ramos wrote: > > Hello Jens, > > Yes, I've tried from different browsers after deleting all cookies, history > and data, same result. > I can access http

Re: Build Jobs not accessible?

2018-11-09 Thread Juan José Ramos
Thanks Jake, Cheers. On Fri, Nov 9, 2018 at 3:14 PM Jacob Barrett wrote: > I bet that after we regenerated it yesterday it isn’t public anymore. I’ll > fix that shortly. > > -Jake > > > > On Nov 9, 2018, at 6:59 AM, Juan José Ramos wrote: > > > > Hello Jens, > > > > Yes, I've tried from differ

Re: [DISCUSS] Cutting 1.8 release branch

2018-11-09 Thread Jason Huynh
I just merged a change last night for GEODE-5884 that I think should make it into the release. On Thu, Nov 8, 2018 at 10:33 AM Anthony Baker wrote: > I’m working on a review of LICENSE and NOTICE. Looks like a few things > slipped in that need to be declared. > > Anthony > > > > On Nov 2, 2018,

Re: [DISCUSS] Cutting 1.8 release branch

2018-11-09 Thread Jason Huynh
I also think that the PR https://github.com/apache/geode/pull/2818, or something that fixes this race, should make it into the release. On Fri, Nov 9, 2018 at 8:59 AM Jason Huynh wrote: > I just merged a change last night for GEODE-5884 that I think should make > it into the release. > > On Thu,

Re: Lombok

2018-11-09 Thread Jinmei Liao
So users who wish to download our code will need to read some documentation on how to set up IDEA/Eclipse before they can compile. It's no longer a simple "import and work with the default IDEA setup". +0 on this. Personally I would prefer to bear with the extra getters/setters than giving new contributor
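For context, a minimal sketch of the trade-off under discussion (the class is illustrative, not Geode code): with Lombok, accessors are generated by an annotation processor at compile time, which is why an IDE must have annotation processing enabled before the project compiles.

```java
import lombok.Getter;
import lombok.Setter;

// With Lombok, getName()/setPort() etc. exist only in the compiled
// bytecode; the source never contains them, so the IDE must run
// Lombok's annotation processor to resolve calls against this class.
@Getter
@Setter
public class MemberConfig {
  private String name;
  private int port;
}
```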

Re: Lombok

2018-11-09 Thread Anthony Baker
I was talking with Dale and he pointed me to this discussion: https://github.com/jhipster/generator-jhipster/issues/398 I think it probably warrants more investigation (e.g. do the issues decried on the internet still exist?) before

Re: Lombok

2018-11-09 Thread Kirk Lund
-1 Personally, I prefer being able to see and manipulate ALL code. I hate autogenerated code, and I don't mind boilerplate code. When I see a class that has too many getters, I see that as a sign that we need to do some refactoring and correct the design of that class. Using Lombok would hi

Re: Apache Geode Branch Housekeeping

2018-11-09 Thread Michael Stolz
The branch named "master" claims to be mine. I don't exactly know how it became "mine". -- Mike Stolz Principal Engineer, GemFire Product Lead Mobile: +1-631-835-4771 On Thu, Nov 8, 2018 at 7:55 PM Patrick Rhomberg wrote: > Hello all! > > We currently have 182 branches on the apache/geode r

Re: Lombok

2018-11-09 Thread Aditya Anchuri
I apologize, I was slightly wrong earlier with regard to the IntelliJ Lombok plugin -- people will not need the IntelliJ plugin for their code to compile -- but they will need to enable compile-time annotation processing as an option. The plugin is a nice-to-have. On Fri, Nov 9, 2018 at 9:4

Re: Lombok

2018-11-09 Thread Aditya Anchuri
I'm happy to put this work on hold for now. As I mentioned, there are other benefits beyond just getters and setters -- but I guess the risk with JDK10 (I was unaware of the problems between Lombok and JDK10) may be unknown enough to give us pause. Thanks for all your input! -Aditya On Fri, No

RED ALARM: build has failed -- no checkins to DEVELOP allowed

2018-11-09 Thread Kirk Lund
Now that I have your attention ;) I'd like to make broken builds a BIGGER DEAL! Nobody can merge to DEVELOP until I announce that the RED ALARM is turned off and the Build is fixed. Thank you, Kirk GEODE-6024: Develop build fails due to checkPom (unknown dependency added) https://issues.apache.or

RED ALARM: DistributedTestOpenJDK8 has failed -- no checkins to DEVELOP allowed

2018-11-09 Thread Kirk Lund
This test has failed in DistributedTestOpenJDK8 #98. Nobody can merge to DEVELOP until I announce that the RED ALARM is turned off and the Build is fixed. There is an exception: the fix for this test can be committed to DEVELOP. Thank you, Kirk GEODE-5943: EvictionDUnitTest failed in jdk11 ht

Re: RED ALARM: build has failed -- no checkins to DEVELOP allowed

2018-11-09 Thread Kirk Lund
PS: The fix for checkPom can be committed to DEVELOP. On Fri, Nov 9, 2018 at 10:14 AM, Kirk Lund wrote: > Now that I have your attention ;) I'd like to make broken builds a BIGGER > DEAL! Nobody can merge to DEVELOP until I announce that the RED ALARM is > turned off and the Build is fixed. > >

Re: RED ALARM: build has failed -- no checkins to DEVELOP allowed

2018-11-09 Thread Kirk Lund
Build checkPom has been restored to GREEN. On Fri, Nov 9, 2018 at 10:21 AM, Kirk Lund wrote: > PS: The fix for checkPom can be committed to DEVELOP. > > > On Fri, Nov 9, 2018 at 10:14 AM, Kirk Lund wrote: > >> Now that I have your attention ;) I'd like to make broken builds a BIGGER >> DEAL! No

[DISCUSS] Deprecating JUnit4CacheTestCase (and maybe JUnit4DistributedTestCase?)

2018-11-09 Thread Galen O'Sullivan
I was looking at a test recently that extended JUnit4CacheTestCase and only used a single server, and changed it to use ClusterStartupRule. JUnit4CacheTestCase adds complexity on top of JUnit4DistributedTestCase, and with the move to ClusterStartupRule for distributed tests, rather than class i
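A minimal sketch of the ClusterStartupRule style being proposed (assuming the rule's startLocatorVM/startServerVM API of the period; the test class and scenario are illustrative):

```java
import org.junit.Rule;
import org.junit.Test;
import org.apache.geode.test.dunit.rules.ClusterStartupRule;
import org.apache.geode.test.dunit.rules.MemberVM;

public class ExampleClusterTest {
  // The rule starts the DUnit VMs and tears them down after each
  // test, replacing the setup inherited from JUnit4CacheTestCase.
  @Rule
  public ClusterStartupRule cluster = new ClusterStartupRule();

  @Test
  public void serverJoinsLocator() {
    MemberVM locator = cluster.startLocatorVM(0);
    MemberVM server = cluster.startServerVM(1, locator.getPort());
    // test logic runs against the explicitly started members
  }
}
```

Compared with extending a base class, the cluster topology is visible at the top of the test rather than hidden in inherited setup methods.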

Re: [DISCUSS] Deprecating JUnit4CacheTestCase (and maybe JUnit4DistributedTestCase?)

2018-11-09 Thread Jinmei Liao
+1 on this. We should break away from extending base test classes now. Extending them makes it quite hard to understand how the test cluster is set up and what the test is doing. On Fri, Nov 9, 2018 at 10:50 AM Galen O'Sullivan wrote: > I was looking at a test recently that extended JUnit4CacheTestCase and on

Re: [DISCUSS] Deprecating JUnit4CacheTestCase (and maybe JUnit4DistributedTestCase?)

2018-11-09 Thread Kirk Lund
*I would like to encourage all Geode developers to start writing tests directly against the Geode User APIs* even in DistributedTests. I'm no longer using *CacheRule, ClientCacheRule, ServerStarterRule, LocatorStarterRule, or ClusterStarterRule* and I'm against encouraging their use any longer. I'l
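As a rough illustration of what a test written directly against the User APIs could look like (a sketch, not Kirk's actual code; the properties and region name are assumptions):

```java
import static org.junit.Assert.assertEquals;

import java.util.Properties;
import org.junit.Test;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class UserApiStyleTest {
  @Test
  public void replicateRegionAcceptsPuts() {
    Properties props = new Properties();
    props.setProperty("mcast-port", "0"); // standalone member, no multicast
    props.setProperty("locators", "");
    Cache cache = new CacheFactory(props).create();
    try {
      Region<String, String> region = cache
          .<String, String>createRegionFactory(RegionShortcut.REPLICATE)
          .create("example");
      region.put("key", "value");
      assertEquals("value", region.get("key"));
    } finally {
      cache.close(); // explicit teardown instead of a rule's after()
    }
  }
}
```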

Re: RED ALARM: DistributedTestOpenJDK8 has failed -- no checkins to DEVELOP allowed

2018-11-09 Thread Jinmei Liao
It is a flaky test; I have a PR that would give a more detailed error message when it fails: https://github.com/apache/geode/pull/2808 On Fri, Nov 9, 2018 at 10:47 AM Kirk Lund wrote: > This test has failed in DistributedTestOpenJDK8 #98. Nobody can merge to > DEVELOP until I announce that the RE

Re: RED ALARM: DistributedTestOpenJDK8 has failed -- no checkins to DEVELOP allowed

2018-11-09 Thread Kirk Lund
Thanks! I'm waiting for Build #99 to finish (hopefully green) and I'll send out one more update on this. On Fri, Nov 9, 2018 at 11:33 AM, Jinmei Liao wrote: > It is a flaky test, I have a PR that would give more detailed error message > when it fails: > > https://github.com/apache/geode/pull/280

Re: RED ALARM: DistributedTestOpenJDK8 has failed -- no checkins to DEVELOP allowed

2018-11-09 Thread Kirk Lund
DistributedTestOpenJDK8 #99 is *green* and thanks to Jinmei we have a PR filed for improving EvictionDUnitTest. Please consider the RED ALARM resolved. https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/99 On Fri, Nov 9, 2018 at 11:

Re: [DISCUSS] Deprecating JUnit4CacheTestCase (and maybe JUnit4DistributedTestCase?)

2018-11-09 Thread Bruce Schuchardt
I agree with Kirk about the Rules and I agree with Galen about moving away from the old abstract test classes. I think that is also in the spirit of what Kirk is saying. There are also tests that have complicated methods for creating caches and regions. These methods have many parameters and

[DISCUSS] Disable merge for failing pull requests

2018-11-09 Thread Dan Smith
Hi all, Kirk's emails reminded me - I think we are at the point now with our PR checks where we should not be merging anything to develop that doesn't pass all of the PR checks. I propose we disable the merge button unless a PR is passing all of the checks. If we are in agreement I'll follow up wi

Re: [DISCUSS] Disable merge for failing pull requests

2018-11-09 Thread Robert Houghton
+1 On Fri, Nov 9, 2018, 12:55 Dan Smith Hi all, > > Kirks emails reminded me - I think we are at the point now with our PR > checks where we should not be merging anything to develop that doesn't pass > all of the PR checks. > > I propose we disable the merge button unless a PR is passing all of

Re: [DISCUSS] Disable merge for failing pull requests

2018-11-09 Thread Kirk Lund
+1 On Fri, Nov 9, 2018 at 12:58 PM, Robert Houghton wrote: > +1 > > On Fri, Nov 9, 2018, 12:55 Dan Smith > > Hi all, > > > > Kirks emails reminded me - I think we are at the point now with our PR > > checks where we should not be merging anything to develop that doesn't > pass > > all of the PR

Re: [DISCUSS] Disable merge for failing pull requests

2018-11-09 Thread Owen Nichols
+1, assuming “all of the PR checks” will soon include Java 11. Implementation-wise it should be as simple as getting someone with admin privileges on the geode repo to change the settings for the “develop” branch to "Require status checks to pass before merging." > On Nov 9, 2018, at 12:58 PM, Robert Ho

Re: [DISCUSS] Disable merge for failing pull requests

2018-11-09 Thread Alexander Murmann
+1 On Fri, Nov 9, 2018 at 12:58 PM Robert Houghton wrote: > +1 > > On Fri, Nov 9, 2018, 12:55 Dan Smith > > Hi all, > > > > Kirks emails reminded me - I think we are at the point now with our PR > > checks where we should not be merging anything to develop that doesn't > pass > > all of the PR

Re: [DISCUSS] Deprecating JUnit4CacheTestCase (and maybe JUnit4DistributedTestCase?)

2018-11-09 Thread Kirk Lund
We need to pull out the DUnit Blackboard from DistributedTestCase and repackage it as a BlackboardRule. It makes sense to make that a JUnit Rule because it's not part of Geode or its User APIs but it's really useful for distributed testing in general. It's also probably the last useful thing that's
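A hypothetical sketch of what such a BlackboardRule could look like, built on JUnit's ExternalResource (the class and method names are assumptions, not an existing Geode API; a real DUnit version would have to share the board across VMs, which this in-process sketch elides):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.junit.rules.ExternalResource;

// Hypothetical BlackboardRule: a shared mailbox for coordinating
// test members, cleared after each test so state never leaks.
public class BlackboardRule extends ExternalResource {
  private final Map<String, Object> mailboxes = new ConcurrentHashMap<>();

  public void setMailbox(String name, Object value) {
    mailboxes.put(name, value);
  }

  @SuppressWarnings("unchecked")
  public <T> T getMailbox(String name) {
    return (T) mailboxes.get(name);
  }

  @Override
  protected void after() {
    mailboxes.clear(); // guarantee a clean board for the next test
  }
}
```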

Re: [DISCUSS] including Java11 in the pipelines

2018-11-09 Thread Kirk Lund
+1 I added my approval to your PRs. I don't know if the yml is correct or not, but I support the overall goal. On Thu, Nov 8, 2018 at 1:19 PM, Owen Nichols wrote: > Sounds like the overwhelming consensus is to keep Java11 gating, add > Java11 to PR, and defer non-gating tests. > > I have prepare

Re: [DISCUSS] Deprecating JUnit4CacheTestCase (and maybe JUnit4DistributedTestCase?)

2018-11-09 Thread Jinmei Liao
I like using the rules because it reduces boilerplate code and the chance of not cleaning up properly after your tests. It also makes what you are really trying to do in the test stand out more in the test code. On Fri, Nov 9, 2018 at 1:37 PM Kirk Lund wrote: > We need to pull out the DUnit Blac

Re: [DISCUSS] Disable merge for failing pull requests

2018-11-09 Thread Anthony Baker
It looks like the current failure rate (post-PR, including all types of failures) for DistributedTest is around 25%. Do most people experience similar failure rates on the *-pr pipeline? I’m specifically wondering about failures unrelated to your changes. Anthony > On Nov 9, 2018, at 12:55

Re: [DISCUSS] Disable merge for failing pull requests

2018-11-09 Thread Helena Bales
+1 As far as the failure rate goes, even if the rate is 25%, restarting the pipeline usually gives a passing run. And this would make us less likely to incorrectly dismiss a failure as not related. If the second build also fails with the same failures, those should be investigated and resolved in so

[DISCUSS] LGTM on pull requests

2018-11-09 Thread Bruce Schuchardt
I'd like to see LGTM run on pull requests. Otherwise I think we're fighting a losing battle trying to improve the quality of our code. For instance, we just had a nice contribution of geospatial functionality that raised 5 alerts, but we only found out about it after the code was merged to dev

Re: [DISCUSS] Deprecating JUnit4CacheTestCase (and maybe JUnit4DistributedTestCase?)

2018-11-09 Thread Kirk Lund
I would love to see you apply some of your passion for that to improving the User APIs so there's less boilerplate code for the Users as well. Please don't forget that for Developers who are not familiar with our Rules, such as ClusterStarterRule, it can be very difficult to understand what it

Re: [DISCUSS] LGTM on pull requests

2018-11-09 Thread Kirk Lund
FindBugs is another one that could potentially help some. I get tired of adding reviews that say "Please change this member variable to be private" -- tools like LGTM and FindBugs can help guide us to better code and prevent us from losing ground in something like the improvements you all have made

Re: [DISCUSS] LGTM on pull requests

2018-11-09 Thread Aditya Anchuri
+1, although as a new contributor (and the person who made the geospatial contribution) I do wonder about the overhead of making PRs increasing even more than it already feels like to me. What if this were a gradle task, maybe like spotless? On Fri, Nov 9, 2018 at 2:20 PM Bruce Schuchardt wrote: > I'd like t

Re: [DISCUSS] LGTM on pull requests

2018-11-09 Thread Kirk Lund
Well, we could run it periodically such as weekly rather than as part of the main pipeline or precheckin. On Fri, Nov 9, 2018 at 2:32 PM, Aditya Anchuri wrote: > +1, although I do wonder about the overhead of making PRs increasing more > than it already feels like to me as a new contributor (as

Re: [DISCUSS] LGTM on pull requests

2018-11-09 Thread Alexander Murmann
I don't have strong opinions on this, but I am always suspicious of quality-indicating CI jobs that only run periodically. If the job discovers something that needs improvement, who is going to do the work and when? On Fri, Nov 9, 2018 at 2:36 PM Kirk Lund wrote: > Well, we could run it periodica

Re: [DISCUSS] LGTM on pull requests

2018-11-09 Thread Nabarun Nag
As for running periodically, LGTM already runs every Monday. As for who would fix it, LGTM mentions which commit caused the failure and who its author was, so I think it's the author's responsibility to fix it. LGTM also lists a table that shows how many alerts were caused by which author

Re: [DISCUSS] LGTM on pull requests

2018-11-09 Thread Jacob Barrett
I opened a ticket with infra earlier this week to enable PR integration. There hasn’t been any movement. > On Nov 9, 2018, at 3:00 PM, Nabarun Nag wrote: > > As per running periodically , LGTM runs it every Monday. > > As for who would fix it, LGTM mentions which commit caused the failure and

Re: [DISCUSS] Deprecating JUnit4CacheTestCase (and maybe JUnit4DistributedTestCase?)

2018-11-09 Thread Helena Bales
+1 for deprecating old test bases. Many of the tests that gave me the most trouble this summer did so because of JUnit4CacheTestCase. Also +1 for pulling out Blackboard into a rule. I will, however, argue for continuing to use ClusterStartupRule. The benefit of that is that it makes sure that all JV

Re: [DISCUSS] Deprecating JUnit4CacheTestCase (and maybe JUnit4DistributedTestCase?)

2018-11-09 Thread Galen O'Sullivan
I wonder if we could use some sort of assertions to validate that tests have cleaned up, at least in a few ways? For example, if a Cache/Locator/DistributedSystem is still running after a test has finished, that's a good sign of a dirty environment. If system properties are still set, that's a
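The system-property half of that idea is straightforward to sketch as a rule (hypothetical code; the check for a still-running Cache/Locator is elided because it would need internal APIs):

```java
import java.util.Properties;
import org.junit.rules.ExternalResource;

// Hypothetical cleanup validator: snapshot system properties before
// the test and fail afterwards if the test leaked any changes.
public class CleanupCheckRule extends ExternalResource {
  private Properties snapshot;

  @Override
  protected void before() {
    snapshot = (Properties) System.getProperties().clone();
  }

  @Override
  protected void after() {
    if (!System.getProperties().equals(snapshot)) {
      throw new AssertionError("Test leaked system property changes");
    }
  }
}
```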

Re: [DISCUSS] Disable merge for failing pull requests

2018-11-09 Thread Kenneth Howe
+1 > On Nov 9, 2018, at 2:07 PM, Helena Bales wrote: > > +1 > As far as the failure rate goes, even if the rate is 25% restarting the > pipeline usually gives a passing run. And this would make us less likely to > incorrectly dismiss a failure as not related. If the second build also > fails wit

Re: [DISCUSS] Deprecating JUnit4CacheTestCase (and maybe JUnit4DistributedTestCase?)

2018-11-09 Thread Jinmei Liao
Yeah, I guess. But why go out of our way to avoid using rules? Why are rules bad? Rules just encapsulate common befores and afters to avoid duplicate code in every test. I don't see why we should avoid using them. On Fri, Nov 9, 2018, 3:44 PM Galen O'Sullivan I wonder if we could use some sort

Re: Our use of Jira "Fix Version/s"

2018-11-09 Thread Helena Bales
+1 to Alexander's suggestion. I don't know where to put stuff like that, but once we have one we should link to it from our new CIO instructions page. On Wed, Nov 7, 2018 at 2:59 PM Alexander Murmann wrote: > Kirk, I think most of us are in agreement that it works as you are > describing. The no

RED ALERT: EvictionDUnitTest failed again DistributedTestOpenJDK8 #101

2018-11-09 Thread Kirk Lund
GEODE-5943: EvictionDUnitTest failed in jdk11 https://issues.apache.org/jira/browse/GEODE-5943 https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/101 org.apache.geode.internal.cache.eviction.EvictionDUnitTest > testDummyInlineNCentr

Re: [DISCUSS] Disable merge for failing pull requests

2018-11-09 Thread Jacob Barrett
-1 I think there is still enough flakiness in the tests that PRs can go red due to something flaky. Blocking a merge when the failure is known to be flaky would be disruptive to progress. I think it would also just encourage people to push directly to work around false reds. This might lead

Re: [DISCUSS] including Java11 in the pipelines

2018-11-09 Thread Owen Nichols
PRs created/updated after 4pm today will now get the additional Java11 checks :) The non-gating jobs have also been shifted right in the pipeline (this is only visible when you click on the “complete