Build failed in Jenkins: Geode-release #42

2017-02-02 Thread Apache Jenkins Server
See 

Changes:

[upthewaterspout] GEODE-2386: Wait until classpath doesn't contain 
gradle-worker.jar

--
[...truncated 674 lines...]
:geode-cq:compileTestJavaNote: Some input files use or override a deprecated 
API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:geode-cq:processTestResources
:geode-cq:testClasses
:geode-cq:checkMissedTests
:geode-cq:spotlessJavaCheck
:geode-cq:spotlessCheck
:geode-cq:test
:geode-cq:check
:geode-cq:build
:geode-cq:distributedTest
:geode-cq:flakyTest
:geode-cq:integrationTest
:geode-json:assemble
:geode-json:compileTestJava UP-TO-DATE
:geode-json:processTestResources UP-TO-DATE
:geode-json:testClasses UP-TO-DATE
:geode-json:checkMissedTests UP-TO-DATE
:geode-json:spotlessJavaCheck
:geode-json:spotlessCheck
:geode-json:test UP-TO-DATE
:geode-json:check
:geode-json:build
:geode-json:distributedTest UP-TO-DATE
:geode-json:flakyTest UP-TO-DATE
:geode-json:integrationTest UP-TO-DATE
:geode-junit:javadoc
:geode-junit:javadocJar
:geode-junit:sourcesJar
:geode-junit:signArchives SKIPPED
:geode-junit:assemble
:geode-junit:compileTestJava
:geode-junit:processTestResources UP-TO-DATE
:geode-junit:testClasses
:geode-junit:checkMissedTests
:geode-junit:spotlessJavaCheck
:geode-junit:spotlessCheck
:geode-junit:test
:geode-junit:check
:geode-junit:build
:geode-junit:distributedTest
:geode-junit:flakyTest
:geode-junit:integrationTest
:geode-lucene:assemble
:geode-lucene:compileTestJavaNote: Some input files use or override a 
deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:geode-lucene:processTestResources
:geode-lucene:testClasses
:geode-lucene:checkMissedTests
:geode-lucene:spotlessJavaCheck
:geode-lucene:spotlessCheck
:geode-lucene:test
:geode-lucene:check
:geode-lucene:build
:geode-lucene:distributedTest

org.apache.geode.cache.lucene.internal.management.LuceneManagementDUnitTest > 
classMethod FAILED
java.lang.RuntimeException: Unable to launch dunit VMs

Caused by:
java.lang.RuntimeException: VMs did not start up within 120 seconds

org.apache.geode.cache.lucene.LuceneIndexCreationOnFixedPRDUnitTest > 
classMethod FAILED
java.lang.RuntimeException: Unable to launch dunit VMs

Caused by:
java.lang.RuntimeException: VMs did not start up within 120 seconds

org.apache.geode.cache.lucene.LuceneQueriesPeerPRRedundancyDUnitTest > 
classMethod FAILED
java.lang.RuntimeException: Unable to launch dunit VMs

Caused by:
java.lang.RuntimeException: VMs did not start up within 120 seconds

org.apache.geode.cache.lucene.LuceneQueriesPeerPRDUnitTest > classMethod FAILED
java.lang.RuntimeException: Unable to launch dunit VMs

Caused by:
java.lang.RuntimeException: VMs did not start up within 120 seconds

86 tests completed, 4 failed
:geode-lucene:distributedTest FAILED
:geode-lucene:flakyTest
:geode-lucene:integrationTest
:geode-old-client-support:assemble
:geode-old-client-support:compileTestJava
:geode-old-client-support:processTestResources UP-TO-DATE
:geode-old-client-support:testClasses
:geode-old-client-support:checkMissedTests
:geode-old-client-support:spotlessJavaCheck
:geode-old-client-support:spotlessCheck
:geode-old-client-support:test
:geode-old-client-support:check
:geode-old-client-support:build
:geode-old-client-support:distributedTest
:geode-old-client-support:flakyTest
:geode-old-client-support:integrationTest
:geode-old-versions:javadoc UP-TO-DATE
:geode-old-versions:javadocJar
:geode-old-versions:sourcesJar
:geode-old-versions:signArchives SKIPPED
:geode-old-versions:assemble
:geode-old-versions:compileTestJava UP-TO-DATE
:geode-old-versions:processTestResources UP-TO-DATE
:geode-old-versions:testClasses UP-TO-DATE
:geode-old-versions:checkMissedTests UP-TO-DATE
:geode-old-versions:spotlessJavaCheck
:geode-old-versions:spotlessCheck
:geode-old-versions:test UP-TO-DATE
:geode-old-versions:check
:geode-old-versions:build
:geode-old-versions:distributedTest UP-TO-DATE
:geode-old-versions:flakyTest UP-TO-DATE
:geode-old-versions:integrationTest UP-TO-DATE
:geode-pulse:assemble
:geode-pulse:compileTestJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: 

 uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:geode-pulse:processTestResources
:geode-pulse:testClasses
:geode-pulse:checkMissedTests
:geode-pulse:sp

RE: are these gemfire bug fixes included in the last geode release?

2017-02-02 Thread Gal Palmery
Thanks.

-Original Message-
From: Anthony Baker [mailto:aba...@pivotal.io] 
Sent: Tuesday, January 31, 2017 17:22
To: dev@geode.apache.org
Subject: Re: are these gemfire bug fixes included in the last geode release?

Gal, I sent you a PM.  I don’t think the Geode community can help with this 
question.

Anthony

> On Jan 29, 2017, at 4:58 AM, Gal Palmery  wrote:
> 
> I'll edit the mail a bit (it was in a table when I sent it, but I see that 
> it's  not very clear) -->
> 
> Can anyone say if the fixes for the below Gemfire support tickets are 
> included in the last geode release:
> 1) 15611661002 - FIXED in GemFire 8.0.0.10 - gfsh fails to connect to locator 
> when executing a pre-configuration script, but returns 0
> 2) 15674334505 - fixed in GemFire 8.2 - NPE in map index when field of type 
> Map == null
> 3) 16972460504 - FIXED in GemFire 8.2.0.1 - Gemfire server disconnected after 
> large GC pause 
> 4) 16972415504 - FIXED in GemFire 8.2.0.7 - Different results in OQL queries 
> for the same entity (using GFSH): one query is for all entities in a region, 
> and the second is for the specific entity.
> 
> Thanks,
> Gal
> -Original Message-
> From: Gal Palmery 
> Sent: Thursday, January 26, 2017 16:07
> To: dev@geode.apache.org
> Subject: are these gemfire bug fixes included in the last geode release?
> 
> Hi All,
> 
> Can anyone say if the fixes for the below Gemfire support tickets are 
> included in the last geode release:
> 
> Support ticket | Issue | Status | Comment
> 15611661002 | gfsh fails to connect to locator when executing a 
> pre-configuration script, but returns 0 | FIXED | FIXED in GemFire 8.0.0.10
> 15674334505 | NPE in map index when field of type Map == null | FIXED | 
> fixed in GemFire 8.2
> 16972460504 | Gemfire server disconnected after large GC pause | FIXED | 
> FIXED in GemFire 8.2.0.1
> 16972415504 | Different results in OQL queries for the same entity (using 
> GFSH): one query is for all entities in a region, and the second is for the 
> specific entity. | FIXED | FIXED in GemFire 8.2.0.7
> 
> 
> Thanks,
> Gal

This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement,

you may review at http://www.amdocs.com/email_disclaimer.asp


Build failed in Jenkins: Geode-nightly #735

2017-02-02 Thread Apache Jenkins Server
See 

Changes:

[Anil] GEODE-1672: Disabled recovering values for LRU region during startup.

[ukohlmeyer] GEODE-2329: Refactor JSONFormatter fromJSON code to reduce 
duplicate

[ukohlmeyer] GEODE-2329: Cleanup test code and fix Classcast issue

[ukohlmeyer] GEODE-2329: spotless

[ukohlmeyer] GEODE-2329: Fixed code from code review

[Anil] Applying spotless on tests added for GEODE-1672

[upthewaterspout] Updating javadocs for FunctionContext to indicate scope of 
ResultSender

[upthewaterspout] GEODE-2386: Wait until classpath doesn't contain 
gradle-worker.jar

[upthewaterspout] Modified InstallerJUnitTest to run under windows

--
[...truncated 549 lines...]
at 
org.apache.geode.distributed.LocatorDUnitTest.lambda$testSSLEnabledLocatorDiesWhenConnectingToNonSSLLocator$bb17a952$1(LocatorDUnitTest.java:535)

org.apache.geode.distributed.LocatorUDPSecurityDUnitTest > 
testSSLEnabledLocatorDiesWhenConnectingToNonSSLLocator FAILED
org.apache.geode.test.dunit.RMIException: While invoking 
org.apache.geode.distributed.LocatorDUnitTest$$Lambda$120/198311082.run in VM 2 
running on Host asf920.gq1.ygridcore.net with 5 VMs

Caused by:
com.jayway.awaitility.core.ConditionTimeoutException: Condition with 
alias 'locator2 dies' didn't complete within 30 seconds because condition with 
org.apache.geode.distributed.LocatorDUnitTest was not fulfilled.

6779 tests completed, 2 failed, 599 skipped
:geode-core:distributedTest FAILED
:geode-core:flakyTest
:geode-core:integrationTest
:geode-cq:assemble
:geode-cq:compileTestJavaNote: Some input files use or override a deprecated 
API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:geode-cq:processTestResources
:geode-cq:testClasses
:geode-cq:checkMissedTests
:geode-cq:spotlessJavaCheck
:geode-cq:spotlessCheck
:geode-cq:test
:geode-cq:check
:geode-cq:build
:geode-cq:distributedTest
:geode-cq:flakyTest
:geode-cq:integrationTest
:geode-json:assemble
:geode-json:compileTestJava UP-TO-DATE
:geode-json:processTestResources UP-TO-DATE
:geode-json:testClasses UP-TO-DATE
:geode-json:checkMissedTests UP-TO-DATE
:geode-json:spotlessJavaCheck
:geode-json:spotlessCheck
:geode-json:test UP-TO-DATE
:geode-json:check
:geode-json:build
:geode-json:distributedTest UP-TO-DATE
:geode-json:flakyTest UP-TO-DATE
:geode-json:integrationTest UP-TO-DATE
:geode-junit:javadoc
:geode-junit:javadocJar
:geode-junit:sourcesJar
:geode-junit:signArchives SKIPPED
:geode-junit:assemble
:geode-junit:compileTestJava
:geode-junit:processTestResources UP-TO-DATE
:geode-junit:testClasses
:geode-junit:checkMissedTests
:geode-junit:spotlessJavaCheck
:geode-junit:spotlessCheck
:geode-junit:test
:geode-junit:check
:geode-junit:build
:geode-junit:distributedTest
:geode-junit:flakyTest
:geode-junit:integrationTest
:geode-lucene:assemble
:geode-lucene:compileTestJavaNote: Some input files use or override a 
deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:geode-lucene:processTestResources
:geode-lucene:testClasses
:geode-lucene:checkMissedTests
:geode-lucene:spotlessJavaCheck
:geode-lucene:spotlessCheck
:geode-lucene:test
:geode-lucene:check
:geode-lucene:build
:geode-lucene:distributedTest
:geode-lucene:flakyTest
:geode-lucene:integrationTest
:geode-old-client-support:assemble
:geode-old-client-support:compileTestJava
:geode-old-client-support:processTestResources UP-TO-DATE
:geode-old-client-support:testClasses
:geode-old-client-support:checkMissedTests
:geode-old-client-support:spotlessJavaCheck
:geode-old-client-support:spotlessCheck
:geode-old-client-support:test
:geode-old-client-support:check
:geode-old-client-support:build
:geode-old-client-support:distributedTest
:geode-old-client-support:flakyTest
:geode-old-client-support:integrationTest
:geode-old-versions:javadoc UP-TO-DATE
:geode-old-versions:javadocJar
:geode-old-versions:sourcesJar
:geode-old-versions:signArchives SKIPPED
:geode-old-versions:assemble
:geode-old-versions:compileTestJava UP-TO-DATE
:geode-old-versions:processTestResources UP-TO-DATE
:geode-old-versions:testClasses UP-TO-DATE
:geode-old-versions:checkMissedTests UP-TO-DATE
:geode-old-versions:spotlessJavaCheck
:geode-old-versions:spotlessCheck
:geode-old-versions:test UP-TO-DATE
:geode-old-versions:check
:geode-old-versions:build
:geode-old-versions:distributedTest UP-TO-DATE
:geode-old-versions:flakyTest UP-TO-DATE
:geode-old-versions:integrationTest UP-TO-DATE
:geode-pulse:assemble
:geode-pulse:compileTestJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation fo

Re: [DISCUSS] Release branch for 1.1.0

2017-02-02 Thread Anthony Baker
Looks like the tests are failing even with your changes.  They pass for me when 
run locally.  Perhaps it’s time to look into dockerizing the jenkins 
environment.

IMO, we can move forward with 1.1.0 since these failures seem to be related to 
the Jenkins environment.  Thoughts?

Anthony

> On Feb 1, 2017, at 2:34 PM, Dan Smith  wrote:
> 
> I checked in what I hope is a workaround for GEODE-2386. We'll see what
> happens when the nightly build runs. It doesn't seem to reproduce in other
> environments.
> 
> -Dan
> 
> On Wed, Feb 1, 2017 at 1:34 PM, Anthony Baker  wrote:
> 
>> If it doesn’t need to be fixed in 1.1.0, please unset the ‘Fix Version’ in
>> JIRA.
>> 
>> Thanks,
>> Anthony
>> 
>> 
>>> On Feb 1, 2017, at 9:53 AM, Kevin Duling  wrote:
>>> 
>>> GEODE-2247 GFSH over HTTP succeeds without authentication
>>> 
>>> The title for this is a little misleading.  Yes, it succeeds, but with an
>>> 'anonymous' and unprivileged user.  That could be a valid use-case.  For
>>> example, dev rest does not require a login to execute a ping.
>>> 
>>> I hope to have it resolved today, but in my opinion, it's not critical
>>> enough to hold up a release.
>>> 
>>> On Wed, Feb 1, 2017 at 9:46 AM, Anthony Baker  wrote:
>>> 
 While we’re finalizing the last fixes, let’s crowd-source the release
 notes.  These will be linked from the releases page on the website and
 included in the ANNOUNCE email.  You can edit the release notes here:
 
 https://cwiki.apache.org/confluence/display/GEODE/Release+Notes
 
 Thanks,
 Anthony
 
> On Jan 31, 2017, at 3:40 PM, Hitesh Khamesra
>> 
 wrote:
> 
> Update: We are waiting for two more fixes.
> GEODE-2386 Unable to launch dunit VMs in nightly builds
> GEODE-2247 GFSH over HTTP succeeds without authentication
> Thanks,Hitesh
> 
> 
>From: Hitesh Khamesra 
> To: "dev@geode.apache.org" 
> Sent: Monday, January 30, 2017 2:55 PM
> Subject: Re: [DISCUSS] Release branch for 1.1.0
> 
> Thanks Bruce. There is one more ticket GEODE-2386, which Kirk is
>> looking
 into it.
> -Hitesh
> 
> 
> From: Bruce Schuchardt 
> To: dev@geode.apache.org
> Sent: Monday, January 30, 2017 2:21 PM
> Subject: Re: [DISCUSS] Release branch for 1.1.0
> 
> I'm done merging these two changes to release/1.1.0
> 
> Le 1/30/2017 à 10:58 AM, Hitesh Khamesra a écrit :
>> Sure. Lets include GEODE-2368 (Need to fix log message in
 DirectChannel) this as well.
>> Thanks.Hitesh
>> 
>> 
>>   From: Bruce Schuchardt 
>> To: dev@geode.apache.org
>> Sent: Monday, January 30, 2017 10:37 AM
>> Subject: Re: [DISCUSS] Release branch for 1.1.0
>> 
>> I'd like to merge into 1.1.0 the change to the Host test class that I
>> checked into develop today.  It's breaking things for some people, so
>> it
>> would be nice to have in the 1.1.0 branch.
>> 
>> Le 1/27/2017 à 11:00 AM, Hitesh Khamesra a écrit :
>>> I have created the release branch "release/1.1.0". Please look at
>> this
 and raise if there is any issue.
>>> If there is no concern then we will start voting next week.
>>> Thanks.Hitesh
>>> 
>>> 
>> 
>> 
>> 
> 
> 
> 
> 
> 
 
 
>> 
>> 



[jira] [Commented] (GEODE-2402) CI Failure: LuceneQueriesPeerFixedPRDUnitTest.returnCorrectResultsWhenRebalanceHappensOnIndexUpdate

2017-02-02 Thread Dan Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850249#comment-15850249
 ] 

Dan Smith commented on GEODE-2402:
--

I don't think this is the same as GEODE-2401. The test is failing because the 
queue does not flush. But I only see this error in the logs, which seems to be 
a fixed PR issue. It reproduced 1 out of 50 times when I was running it:

{noformat}
[vm0] [info 2017/02/01 17:44:19.838 PST  tid=0x11e] Exception occurred 
while processing  
UpdateOperation(EntryEventImpl[op=CREATE;region=/__PR/_B__index#__region.chunks_1;key=org.apache.geode.cache.lucene.internal.filesystem.ChunkKey@6144be8a;oldValue=null;newValue=(63,-41,108,23,28,76,117,99,101,110,101,53,48,83,116,111,114,101,100,70,105,101,108,100,115,70,97,115,116,68,97,116,97,0,0,0,1,-101,114,0,-66,-32,33,31,62,-68,-51,-122,-111,35,112,-122,-2,0,-128,-128,1,2);callbackArg=null;originRemote=false;originMember=172.16.115.242(18677):32770;version={v1;
 rv2; time=1485999859832};id=EventID[threadID=10;sequenceID=6]])
[vm0] java.lang.IllegalStateException: For FixedPartitionedRegion 
"index#_region.chunks", FixedPartitionResolver is not available (neither 
through the partition attribute partition-resolver nor key/callbackArg 
implementing FixedPartitionResolver)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.internal.cache.PartitionedRegionHelper.getHashKey(PartitionedRegionHelper.java:571)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.internal.cache.PartitionedRegionHelper.getHashKey(PartitionedRegionHelper.java:501)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.internal.cache.PartitionedRegion.getKeyInfo(PartitionedRegion.java:9848)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.internal.cache.EntryEventImpl.<init>(EntryEventImpl.java:225)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.internal.cache.EntryEventImpl.create(EntryEventImpl.java:377)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.internal.cache.partitioned.PutMessage.operateOnPartitionedRegion(PutMessage.java:695)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.internal.cache.partitioned.PartitionMessage.process(PartitionMessage.java:342)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:376)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.distributed.internal.DistributionMessage.schedule(DistributionMessage.java:434)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.distributed.internal.DistributionManager.scheduleIncomingMessage(DistributionManager.java:3504)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.distributed.internal.DistributionManager.handleIncomingDMsg(DistributionManager.java:3137)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.distributed.internal.DistributionManager$MyListener.messageReceived(DistributionManager.java:4311)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManager.dispatchMessage(GMSMembershipManager.java:1115)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManager.handleOrDeferMessage(GMSMembershipManager.java:1039)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManager$MyDCReceiver.messageReceived(GMSMembershipManager.java:407)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.distributed.internal.direct.DirectChannel.receive(DirectChannel.java:715)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.internal.tcp.TCPConduit.messageReceived(TCPConduit.java:877)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.internal.tcp.Connection.dispatchMessage(Connection.java:4033)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.internal.tcp.Connection.processNIOBuffer(Connection.java:3615)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.internal.tcp.Connection.runNioReader(Connection.java:1865)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
org.apache.geode.internal.tcp.Connection.run(Connection.java:1726)
[vm0]   at Remote Member '172.16.115.242(18682):32772' in 
java.lang.Thread.run(Thread.java:745)
[vm0]   at 
org.apache.geode.distributed.internal.ReplyException.handleAsUnexpected(ReplyException.java:85)
[vm0]   at 
org.apache.geode.internal.cache.DistributedCacheOperation.waitForAckIfNeeded(DistributedCacheOperation.java:741)
[vm0]   at 
org.

[jira] [Updated] (GEODE-2413) peer-to-peer authentication: Peer need to re-authenticate coordinator while accepting view message

2017-02-02 Thread Hitesh Khamesra (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Khamesra updated GEODE-2413:
---
Summary: peer-to-peer authentication: Peer need to re-authenticate 
coordinator while accepting view message  (was: peer-to-peer authentication: 
Peer need to re-authenticate view message)

> peer-to-peer authentication: Peer need to re-authenticate coordinator while 
> accepting view message
> --
>
> Key: GEODE-2413
> URL: https://issues.apache.org/jira/browse/GEODE-2413
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Hitesh Khamesra
> Fix For: 1.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] Release branch for 1.1.0

2017-02-02 Thread Hitesh Khamesra
We want to fix one more issue in this release:
GEODE-2413 peer-to-peer authentication: Peer need to re-authenticate 
coordinator while accepting view message

GEODE-2386 Unable to launch dunit VMs in nightly builds

>> IMO, we can move forward with 1.1.0 since these failures seem to be related 
>> to the Jenkins environment.  Thoughts?

+1

Thanks.
Hitesh



  From: Anthony Baker 
 To: dev@geode.apache.org 
 Sent: Thursday, February 2, 2017 9:16 AM
 Subject: Re: [DISCUSS] Release branch for 1.1.0
   
Looks like the tests are failing even with your changes.  They pass for me when 
run locally.  Perhaps it’s time to look into dockerizing the jenkins 
environment.

IMO, we can move forward with 1.1.0 since these failures seem to be related to 
the Jenkins environment.  Thoughts?

Anthony

> On Feb 1, 2017, at 2:34 PM, Dan Smith  wrote:
> 
> I checked in what I hope is a workaround for GEODE-2386. We'll see what
> happens when the nightly build runs. It doesn't seem to reproduce in other
> environments.
> 
> -Dan
> 
> On Wed, Feb 1, 2017 at 1:34 PM, Anthony Baker  wrote:
> 
>> If it doesn’t need to be fixed in 1.1.0, please unset the ‘Fix Version’ in
>> JIRA.
>> 
>> Thanks,
>> Anthony
>> 
>> 
>>> On Feb 1, 2017, at 9:53 AM, Kevin Duling  wrote:
>>> 
>>> GEODE-2247 GFSH over HTTP succeeds without authentication
>>> 
>>> The title for this is a little misleading.  Yes, it succeeds, but with an
>>> 'anonymous' and unprivileged user.  That could be a valid use-case.  For
>>> example, dev rest does not require a login to execute a ping.
>>> 
>>> I hope to have it resolved today, but in my opinion, it's not critical
>>> enough to hold up a release.
>>> 
>>> On Wed, Feb 1, 2017 at 9:46 AM, Anthony Baker  wrote:
>>> 
 While we’re finalizing the last fixes, let’s crowd-source the release
 notes.  These will be linked from the releases page on the website and
 included in the ANNOUNCE email.  You can edit the release notes here:
 
 https://cwiki.apache.org/confluence/display/GEODE/Release+Notes
 
 Thanks,
 Anthony
 
> On Jan 31, 2017, at 3:40 PM, Hitesh Khamesra
>> 
 wrote:
> 
> Update: We are waiting for two more fixes.
> GEODE-2386 Unable to launch dunit VMs in nightly builds
> GEODE-2247 GFSH over HTTP succeeds without authentication
> Thanks,Hitesh
> 
> 
>    From: Hitesh Khamesra 
> To: "dev@geode.apache.org" 
> Sent: Monday, January 30, 2017 2:55 PM
> Subject: Re: [DISCUSS] Release branch for 1.1.0
> 
> Thanks Bruce. There is one more ticket GEODE-2386, which Kirk is
>> looking
 into it.
> -Hitesh
> 
> 
>    From: Bruce Schuchardt 
> To: dev@geode.apache.org
> Sent: Monday, January 30, 2017 2:21 PM
> Subject: Re: [DISCUSS] Release branch for 1.1.0
> 
> I'm done merging these two changes to release/1.1.0
> 
> Le 1/30/2017 à 10:58 AM, Hitesh Khamesra a écrit :
>> Sure. Lets include GEODE-2368 (Need to fix log message in
 DirectChannel) this as well.
>> Thanks.Hitesh
>> 
>> 
>>      From: Bruce Schuchardt 
>> To: dev@geode.apache.org
>> Sent: Monday, January 30, 2017 10:37 AM
>> Subject: Re: [DISCUSS] Release branch for 1.1.0
>> 
>> I'd like to merge into 1.1.0 the change to the Host test class that I
>> checked into develop today.  It's breaking things for some people, so
>> it
>> would be nice to have in the 1.1.0 branch.
>> 
>> Le 1/27/2017 à 11:00 AM, Hitesh Khamesra a écrit :
>>> I have created the release branch "release/1.1.0". Please look at
>> this
 and raise if there is any issue.
>>> If there is no concern then we will start voting next week.
>>> Thanks.Hitesh
>>> 
>>> 
>> 
>> 
>> 
> 
> 
> 
> 
> 
 
 
>> 
>> 


   

GEODE-2413

2017-02-02 Thread Kirk Lund
Can you please add a Description? It's hard to figure out what this ticket
is actually about.

Thanks,
Kirk


gfsh over http & authentication

2017-02-02 Thread Kevin Duling
It's been reported in GEODE-2247 that gfsh can connect in a secured
environment without a username/password when using the --use-http flag.
When using a jmx connection, this would immediately prompt for
user/password.

In the http environment, the connection isn't any less secure.  The moment
one attempts to execute a command that an "anonymous user" cannot execute,
they will receive a failure with a message informing them that the user (in
this case anonymous) cannot execute that command.  That's all fine and
good, but the UX should probably be to fail instead on the 'connect' when
in a secure environment.

Opinions?

The issue is that gfsh uses the 'ping' endpoint to determine connectivity,
which is not secured.  Moreover, it starts a connection poll, hitting that
endpoint every 500ms to ensure the connection is still alive.  I can't
determine why it's doing this other than to try to wrap an artificial
'state' in to the stateless nature of REST.  The only advantage I see is
that if I kill my server, gfsh knows right away that it's been disconnected
from it.

I have not yet determined whether or not the socket stays open through all
of this.  I suspect that it does or otherwise I'd see a lot of FIN_WAIT
entries in my netstat results.

One possible solution to this is to implement security in the endpoint.
But ShellCommandsController.java doesn't have any security in it.  Security
is handled further downstream.
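
For illustration only (a sketch under assumptions, not the actual 
ShellCommandsController or Geode's security wiring): a ping-style endpoint that 
refuses anonymous callers could look roughly like the following in plain Spring 
MVC with Spring Security, so that gfsh's connect/poll fails immediately on 
missing or bad credentials rather than on the first privileged command.

// Hypothetical sketch -- not the real Geode management controller.
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.security.authentication.AnonymousAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class SecuredPingController {

  @GetMapping("/ping")
  public ResponseEntity<String> ping() {
    // Reject unauthenticated or anonymous callers explicitly.
    Authentication auth = SecurityContextHolder.getContext().getAuthentication();
    if (auth == null || !auth.isAuthenticated()
        || auth instanceof AnonymousAuthenticationToken) {
      return new ResponseEntity<>("authentication required", HttpStatus.UNAUTHORIZED);
    }
    return ResponseEntity.ok("pong");
  }
}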


Re: gfsh over http & authentication

2017-02-02 Thread Anthony Baker
Seems odd to me that the ‘connect’ command is where the credentials are 
supplied but the failures are only realized when invoking a secure command.  So 
I would need to go back and disconnect / reconnect to fix a password typo.

As a reference point, does ‘connect’ over JMX surface authentication errors?

Anthony

> On Feb 2, 2017, at 10:37 AM, Kevin Duling  wrote:
> 
> It's been reported in GEODE-2247 that gfsh can connect in a secured
> environment without a username/password when using the --use-http flag.
> When using a jmx connection, this would immediately prompt for
> user/password.
> 
> In the http environment, the connection isn't any less secure.  The moment
> one attempts to execute a command that an "anonymous user" cannot execute,
> they will receive a failure with a message informing them that the user (in
> this case anonymous) cannot execute that command.  That's all fine and
> good, but the UX should probably be to fail instead on the 'connect' when
> in a secure environment.
> 
> Opinions?
> 
> The issue is that gfsh uses the 'ping' endpoint to determine connectivity,
> which is not secured.  Moreover, it starts a connection poll, hitting that
> endpoint every 500ms to ensure the connection is still alive.  I can't
> determine why it's doing this other than to try to wrap an artificial
> 'state' in to the stateless nature of REST.  The only advantage I see is
> that if I kill my server, gfsh knows right away that it's been disconnected
> from it.
> 
> I have not yet determined whether or not the socket stays open through all
> of this.  I suspect that it does or otherwise I'd see a lot of FIN_WAIT
> entries in my netstat results.
> 
> One possible solution to this is to implement security in the endpoint.
> But ShellCommandsContoller.java doesn't have any security in it.  Security
> is handled further downstream.



Volunteer For Creating February Board Report

2017-02-02 Thread Mark Bretl
Hi,

It's that time again to create a board report for the February 15th board
meeting. Are there any volunteers for creating a draft?

The initial report should be submitted next Wednesday, February 8th. We can
edit the report until Friday the 10th, after which it is customary for board
members to start reviewing reports.

Template report can be found at:
https://cwiki.apache.org/confluence/display/GEODE/ASF+Board+Report+Template

Best Regards,

--Mark


[jira] [Updated] (GEODE-2413) peer-to-peer authentication: Peer need to re-authenticate coordinator while accepting view message

2017-02-02 Thread Hitesh Khamesra (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Khamesra updated GEODE-2413:
---
Description: In peer-to-peer authentication, coordinator authenticates the 
joining member. Then coordinator includes that new member in the cluster and 
sends new view message. This view message should include coordinator's 
credential so that joining member can authenticate coordinator as well (i.e. 
mutual authentication)

> peer-to-peer authentication: Peer need to re-authenticate coordinator while 
> accepting view message
> --
>
> Key: GEODE-2413
> URL: https://issues.apache.org/jira/browse/GEODE-2413
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Hitesh Khamesra
> Fix For: 1.1.0
>
>
> In peer-to-peer authentication, coordinator authenticates the joining member. 
> Then coordinator includes that new member in the cluster and sends new view 
> message. This view message should include coordinator's credential so that 
> joining member can authenticate coordinator as well (i.e. mutual 
> authentication)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: GEODE-2413

2017-02-02 Thread Hitesh Khamesra
Added description based on my understanding.   
-Hitesh

  From: Kirk Lund 
 To: Hitesh Khamesra ; geode  
 Sent: Thursday, February 2, 2017 10:35 AM
 Subject: GEODE-2413
   
Can you please add a Description? It's hard to figure out what this ticket
is actually about.

Thanks,
Kirk


   

Re: gfsh over http & authentication

2017-02-02 Thread Kevin Duling
Yes it does, immediately on the connect.  So the behavior is different.

On Thu, Feb 2, 2017 at 10:48 AM, Anthony Baker  wrote:

> Seems odd to me that the ‘connect’ command is where the credentials are
> supplied but the failures are only realized when invoking a secure
> command.  So I would need to go back and disconnect / reconnect to fix a
> password typo.
>
> As a reference point, does ‘connect’ over JMX surface authentication
> errors?
>
> Anthony
>
> > On Feb 2, 2017, at 10:37 AM, Kevin Duling  wrote:
> >
> > It's been reported in GEODE-2247 that gfsh can connect in a secured
> > environment without a username/password when using the --use-http flag.
> > When using a jmx connection, this would immediately prompt for
> > user/password.
> >
> > In the http environment, the connection isn't any less secure.  The
> moment
> > one attempts to execute a command that an "anonymous user" cannot
> execute,
> > they will receive a failure with a message informing them that the user
> (in
> > this case anonymous) cannot execute that command.  That's all fine and
> > good, but the UX should probably be to fail instead on the 'connect' when
> > in a secure environment.
> >
> > Opinions?
> >
> > The issue is that gfsh uses the 'ping' endpoint to determine
> connectivity,
> > which is not secured.  Moreover, it starts a connection poll, hitting
> that
> > endpoint every 500ms to ensure the connection is still alive.  I can't
> > determine why it's doing this other than to try to wrap an artificial
> > 'state' in to the stateless nature of REST.  The only advantage I see is
> > that if I kill my server, gfsh knows right away that it's been
> disconnected
> > from it.
> >
> > I have not yet determined whether or not the socket stays open through
> all
> > of this.  I suspect that it does or otherwise I'd see a lot of FIN_WAIT
> > entries in my netstat results.
> >
> > One possible solution to this is to implement security in the endpoint.
> > But ShellCommandsContoller.java doesn't have any security in it.
> Security
> > is handled further downstream.
>
>


[GitHub] geode pull request #382: GEODE-2414: mark a failing test as flaky.

2017-02-02 Thread galen-pivotal
GitHub user galen-pivotal opened a pull request:

https://github.com/apache/geode/pull/382

GEODE-2414: mark a failing test as flaky.

This is just a temporary fix for CI until we can diagnose the issue.

@kohlmu-pivotal @hiteshk25 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/galen-pivotal/incubator-geode 
feature/GEODE-2412

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/geode/pull/382.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #382


commit 9ffd1933c8110c274388be7d53d6a419ff5c0740
Author: Galen O'Sullivan 
Date:   2017-02-02T19:32:24Z

GEODE-2414: mark a failing test as flaky.

This is just a temporary fix for CI until we can diagnose the issue.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: gfsh over http & authentication

2017-02-02 Thread John Blum
Back in the day, I introduced this "polling thread" to determine whether
*Gfsh* was still connected, since as you say, in a HTTP "stateless"
environment and in the absence of a "persistent" connection, it otherwise
does not know.

So, to simulate the behavior of *Gfsh* when connected via JMX RMI, I needed
to poll the Manager.  That way when the Manager was no longer available, it
would display that *Gfsh* was no longer connected AND that the commands
that "require a connection" (e.g. `list region`) were no longer
available... again preserving the existing behavior in HTTP mode.

Security (basic auth) had not been implemented in *Gfsh* at that time when
I created the Management REST API (or rather, it is more accurate to say...
REST-like; it's not a true REST-ful interface to be precise, which is one
reason it never was made public for users to consume, though it could have
been, providing we introduce the proper notion of  REST-ful resources
abstractions and change the endpoints (URIs) appropriately; anyway...).

-j
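
As an aside, a minimal sketch of the polling idea described above (illustrative 
only, assuming a plain HTTP health URL; this is not Geode's actual 
implementation): a scheduled task hits the endpoint every 500 ms and flips a 
"connected" flag that the shell can consult before running connection-dependent 
commands.

// Illustrative poller, not the real gfsh code.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class ConnectionPoller {
  private final AtomicBoolean connected = new AtomicBoolean(false);
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  public void start(String pingUrl) {
    scheduler.scheduleWithFixedDelay(() -> {
      try {
        HttpURLConnection conn = (HttpURLConnection) new URL(pingUrl).openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        connected.set(conn.getResponseCode() == 200);
        conn.disconnect();
      } catch (Exception e) {
        // Manager unreachable: mark the shell as disconnected.
        connected.set(false);
      }
    }, 0, 500, TimeUnit.MILLISECONDS);
  }

  public boolean isConnected() {
    return connected.get();
  }

  public void stop() {
    scheduler.shutdownNow();
  }
}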


On Thu, Feb 2, 2017 at 11:08 AM, Kevin Duling  wrote:

> Yes it does, immediately on the connect.  So the behavior is different.
>
> On Thu, Feb 2, 2017 at 10:48 AM, Anthony Baker  wrote:
>
> > Seems odd to me that the ‘connect’ command is where the credentials are
> > supplied but the failures are only realized when invoking a secure
> > command.  So I would need to go back and disconnect / reconnect to fix a
> > password typo.
> >
> > As a reference point, does ‘connect’ over JMX surface authentication
> > errors?
> >
> > Anthony
> >
> > > On Feb 2, 2017, at 10:37 AM, Kevin Duling  wrote:
> > >
> > > It's been reported in GEODE-2247 that gfsh can connect in a secured
> > > environment without a username/password when using the --use-http flag.
> > > When using a jmx connection, this would immediately prompt for
> > > user/password.
> > >
> > > In the http environment, the connection isn't any less secure.  The
> > moment
> > > one attempts to execute a command that an "anonymous user" cannot
> > execute,
> > > they will receive a failure with a message informing them that the user
> > (in
> > > this case anonymous) cannot execute that command.  That's all fine and
> > > good, but the UX should probably be to fail instead on the 'connect'
> when
> > > in a secure environment.
> > >
> > > Opinions?
> > >
> > > The issue is that gfsh uses the 'ping' endpoint to determine
> > connectivity,
> > > which is not secured.  Moreover, it starts a connection poll, hitting
> > that
> > > endpoint every 500ms to ensure the connection is still alive.  I can't
> > > determine why it's doing this other than to try to wrap an artificial
> > > 'state' in to the stateless nature of REST.  The only advantage I see
> is
> > > that if I kill my server, gfsh knows right away that it's been
> > disconnected
> > > from it.
> > >
> > > I have not yet determined whether or not the socket stays open through
> > all
> > > of this.  I suspect that it does or otherwise I'd see a lot of FIN_WAIT
> > > entries in my netstat results.
> > >
> > > One possible solution to this is to implement security in the endpoint.
> > > But ShellCommandsContoller.java doesn't have any security in it.
> > Security
> > > is handled further downstream.
> >
> >
>



-- 
-John
john.blum10101 (skype)


Re: gfsh over http & authentication

2017-02-02 Thread Kevin Duling
Good to know some history on it.  The connection probably doesn't need to
poll every 500ms.  I would think 2 seconds or even 5 seconds would be
sufficient in the general case.

If we make ping require authentication, it may resolve the issue.  But I'm
not sure that's the right thing to do.  We could create a 'ping2' endpoint
(with some better name that I cannot currently think of) that does require
auth for this thread to validate the connection.

On Thu, Feb 2, 2017 at 11:49 AM, John Blum  wrote:

> Back in the day, I introduced this "polling thread" to determine whether
> *Gfsh* was still connected, since as you say, in a HTTP "stateless"
> environment and in the absence of a "persistent" connection, it otherwise
> does not know.
>
> So, to simulate the behavior of *Gfsh* when connected via JMX RMI, I needed
> to poll the Manager.  That way when the Manager was no longer available, it
> would display that *Gfsh* was no longer connected AND that the commands
> that "require a connection" (e.g. `list region`) were no longer
> available... again preserving the existing behavior in HTTP mode.
>
> Security (basic auth) had not been implemented in *Gfsh* at that time when
> I created the Management REST API (or rather, it is more accurate to say...
> REST-like; it's not a true REST-ful interface to be precise, which is one
> reason it never was made public for users to consume, though it could have
> been, providing we introduce the proper notion of  REST-ful resources
> abstractions and change the endpoints (URIs) appropriately; anyway...).
>
> -j
>
>
> On Thu, Feb 2, 2017 at 11:08 AM, Kevin Duling  wrote:
>
> > Yes it does, immediately on the connect.  So the behavior is different.
> >
> > On Thu, Feb 2, 2017 at 10:48 AM, Anthony Baker 
> wrote:
> >
> > > Seems odd to me that the ‘connect’ command is where the credentials are
> > > supplied but the failures are only realized when invoking a secure
> > > command.  So I would need to go back and disconnect / reconnect to fix
> a
> > > password typo.
> > >
> > > As a reference point, does ‘connect’ over JMX surface authentication
> > > errors?
> > >
> > > Anthony
> > >
> > > > On Feb 2, 2017, at 10:37 AM, Kevin Duling 
> wrote:
> > > >
> > > > It's been reported in GEODE-2247 that gfsh can connect in a secured
> > > > environment without a username/password when using the --use-http
> flag.
> > > > When using a jmx connection, this would immediately prompt for
> > > > user/password.
> > > >
> > > > In the http environment, the connection isn't any less secure.  The
> > > moment
> > > > one attempts to execute a command that an "anonymous user" cannot
> > > execute,
> > > > they will receive a failure with a message informing them that the
> user
> > > (in
> > > > this case anonymous) cannot execute that command.  That's all fine
> and
> > > > good, but the UX should probably be to fail instead on the 'connect'
> > when
> > > > in a secure environment.
> > > >
> > > > Opinions?
> > > >
> > > > The issue is that gfsh uses the 'ping' endpoint to determine
> > > connectivity,
> > > > which is not secured.  Moreover, it starts a connection poll, hitting
> > > that
> > > > endpoint every 500ms to ensure the connection is still alive.  I
> can't
> > > > determine why it's doing this other than to try to wrap an artificial
> > > > 'state' in to the stateless nature of REST.  The only advantage I see
> > is
> > > > that if I kill my server, gfsh knows right away that it's been
> > > disconnected
> > > > from it.
> > > >
> > > > I have not yet determined whether or not the socket stays open
> through
> > > all
> > > > of this.  I suspect that it does or otherwise I'd see a lot of
> FIN_WAIT
> > > > entries in my netstat results.
> > > >
> > > > One possible solution to this is to implement security in the
> endpoint.
> > > > But ShellCommandsContoller.java doesn't have any security in it.
> > > Security
> > > > is handled further downstream.
> > >
> > >
> >
>
>
>
> --
> -John
> john.blum10101 (skype)
>


Re: gfsh over http & authentication

2017-02-02 Thread John Blum
> The connection probably doesn't need to poll every 500ms.

500 ms provided a good (nearly consistent) UX for the user to know almost
instantly that the Manager went away, like the JMX counterpart.  2s is
arguable; 5s is probably too long as the user could already be typing
another command that is not available.  In that case they might get another
kind of error (don't recall for sure).  Anyway, food for thought.

If another endpoint is needed (though I cannot imagine why) perhaps `
securePing()` would be more descriptive and still offer up an alternative
route.



On Thu, Feb 2, 2017 at 11:55 AM, Kevin Duling  wrote:

> Good to know some history on it.  The connection probably doesn't need to
> poll every 500ms.  I would think 2 seconds or even 5 seconds would be
> sufficient in the general case.
>
> If we make ping require authentication, it may resolve the issue.  But I'm
> not sure that's the right thing to do.  We could create a 'ping2' endpoint
> (with some better name that I cannot currently think of) that does require
> auth for this thread to validate the connection.
>
> On Thu, Feb 2, 2017 at 11:49 AM, John Blum  wrote:
>
> > Back in the day, I introduced this "polling thread" to determine whether
> > *Gfsh* was still connected, since as you say, in a HTTP "stateless"
> > environment and in the absence of a "persistent" connection, it otherwise
> > does not know.
> >
> > So, to simulate the behavior of *Gfsh* when connected via JMX RMI, I
> needed
> > to poll the Manager.  That way when the Manager was no longer available,
> it
> > would display that *Gfsh* was no longer connected AND that the commands
> > that "require a connection" (e.g. `list region`) were no longer
> > available... again preserving the existing behavior in HTTP mode.
> >
> > Security (basic auth) had not been implemented in *Gfsh* at that time
> when
> > I created the Management REST API (or rather, it is more accurate to
> say...
> > REST-like; it's not a true REST-ful interface to be precise, which is one
> > reason it never was made public for users to consume, though it could
> have
> > been, providing we introduce the proper notion of  REST-ful resources
> > abstractions and change the endpoints (URIs) appropriately; anyway...).
> >
> > -j
> >
> >
> > On Thu, Feb 2, 2017 at 11:08 AM, Kevin Duling 
> wrote:
> >
> > > Yes it does, immediately on the connect.  So the behavior is different.
> > >
> > > On Thu, Feb 2, 2017 at 10:48 AM, Anthony Baker 
> > wrote:
> > >
> > > > Seems odd to me that the ‘connect’ command is where the credentials
> are
> > > > supplied but the failures are only realized when invoking a secure
> > > > command.  So I would need to go back and disconnect / reconnect to
> fix
> > a
> > > > password typo.
> > > >
> > > > As a reference point, does ‘connect’ over JMX surface authentication
> > > > errors?
> > > >
> > > > Anthony
> > > >
> > > > > On Feb 2, 2017, at 10:37 AM, Kevin Duling 
> > wrote:
> > > > >
> > > > > It's been reported in GEODE-2247 that gfsh can connect in a secured
> > > > > environment without a username/password when using the --use-http
> > flag.
> > > > > When using a jmx connection, this would immediately prompt for
> > > > > user/password.
> > > > >
> > > > > In the http environment, the connection isn't any less secure.  The
> > > > moment
> > > > > one attempts to execute a command that an "anonymous user" cannot
> > > > execute,
> > > > > they will receive a failure with a message informing them that the
> > user
> > > > (in
> > > > > this case anonymous) cannot execute that command.  That's all fine
> > and
> > > > > good, but the UX should probably be to fail instead on the
> 'connect'
> > > when
> > > > > in a secure environment.
> > > > >
> > > > > Opinions?
> > > > >
> > > > > The issue is that gfsh uses the 'ping' endpoint to determine
> > > > connectivity,
> > > > > which is not secured.  Moreover, it starts a connection poll,
> hitting
> > > > that
> > > > > endpoint every 500ms to ensure the connection is still alive.  I
> > can't
> > > > > determine why it's doing this other than to try to wrap an
> artificial
> > > > > 'state' in to the stateless nature of REST.  The only advantage I
> see
> > > is
> > > > > that if I kill my server, gfsh knows right away that it's been
> > > > disconnected
> > > > > from it.
> > > > >
> > > > > I have not yet determined whether or not the socket stays open
> > through
> > > > all
> > > > > of this.  I suspect that it does or otherwise I'd see a lot of
> > FIN_WAIT
> > > > > entries in my netstat results.
> > > > >
> > > > > One possible solution to this is to implement security in the
> > endpoint.
> > > > > But ShellCommandsContoller.java doesn't have any security in it.
> > > > Security
> > > > > is handled further downstream.
> > > >
> > > >
> > >
> >
> >
> >
> > --
> > -John
> > john.blum10101 (skype)
> >
>



-- 
-John
john.blum10101 (skype)


Re: gfsh over http & authentication

2017-02-02 Thread Michael William Dodge
A rule I've heard for UX is that anything over 200 ms is noticeable and 
anything over 2 s is slow. Unless polling every 500 ms is causing problems, it 
might be best to leave it at 500 ms as a decent compromise between efficiency 
and responsiveness.

Sarge

> On 2 Feb, 2017, at 12:04, John Blum  wrote:
> 
>> The connection probably doesn't need to poll every 500ms.
> 
> 500 ms provided a good (nearly consistent) UX for the user to know almost
> instantly that the Manager went away, like the JMX counterpart.  2s is
> arguable; 5s is probably too long as the user could already be typing
> another command that is not available.  In that case they might get another
> kind of error (don't recall for sure).  Anyway, food for thought.
> 
> If another endpoint is needed (though I cannot imagine why) perhaps `
> securePing()` would be more descriptive and still offer up an alternative
> route.
> 
> 
> 
> On Thu, Feb 2, 2017 at 11:55 AM, Kevin Duling  wrote:
> 
>> Good to know some history on it.  The connection probably doesn't need to
>> poll every 500ms.  I would think 2 seconds or even 5 seconds would be
>> sufficient in the general case.
>> 
>> If we make ping require authentication, it may resolve the issue.  But I'm
>> not sure that's the right thing to do.  We could create a 'ping2' endpoint
>> (with some better name that I cannot currently think of) that does require
>> auth for this thread to validate the connection.
>> 
>> On Thu, Feb 2, 2017 at 11:49 AM, John Blum  wrote:
>> 
>>> Back in the day, I introduced this "polling thread" to determine whether
>>> *Gfsh* was still connected, since as you say, in a HTTP "stateless"
>>> environment and in the absence of a "persistent" connection, it otherwise
>>> does not know.
>>> 
>>> So, to simulate the behavior of *Gfsh* when connected via JMX RMI, I
>> needed
>>> to poll the Manager.  That way when the Manager was no longer available,
>> it
>>> would display that *Gfsh* was no longer connected AND that the commands
>>> that "require a connection" (e.g. `list region`) were no longer
>>> available... again preserving the existing behavior in HTTP mode.
>>> 
>>> Security (basic auth) had not been implemented in *Gfsh* at that time
>> when
>>> I created the Management REST API (or rather, it is more accurate to
>> say...
>>> REST-like; it's not a true REST-ful interface to be precise, which is one
>>> reason it never was made public for users to consume, though it could
>> have
>>> been, providing we introduce the proper notion of  REST-ful resources
>>> abstractions and change the endpoints (URIs) appropriately; anyway...).
>>> 
>>> -j
>>> 
>>> 
>>> On Thu, Feb 2, 2017 at 11:08 AM, Kevin Duling 
>> wrote:
>>> 
 Yes it does, immediately on the connect.  So the behavior is different.
 
 On Thu, Feb 2, 2017 at 10:48 AM, Anthony Baker 
>>> wrote:
 
> Seems odd to me that the ‘connect’ command is where the credentials
>> are
> supplied but the failures are only realized when invoking a secure
> command.  So I would need to go back and disconnect / reconnect to
>> fix
>>> a
> password typo.
> 
> As a reference point, does ‘connect’ over JMX surface authentication
> errors?
> 
> Anthony
> 
>> On Feb 2, 2017, at 10:37 AM, Kevin Duling 
>>> wrote:
>> 
>> It's been reported in GEODE-2247 that gfsh can connect in a secured
>> environment without a username/password when using the --use-http
>>> flag.
>> When using a jmx connection, this would immediately prompt for
>> user/password.
>> 
>> In the http environment, the connection isn't any less secure.  The
> moment
>> one attempts to execute a command that an "anonymous user" cannot
> execute,
>> they will receive a failure with a message informing them that the
>>> user
> (in
>> this case anonymous) cannot execute that command.  That's all fine
>>> and
>> good, but the UX should probably be to fail instead on the
>> 'connect'
 when
>> in a secure environment.
>> 
>> Opinions?
>> 
>> The issue is that gfsh uses the 'ping' endpoint to determine
> connectivity,
>> which is not secured.  Moreover, it starts a connection poll,
>> hitting
> that
>> endpoint every 500ms to ensure the connection is still alive.  I
>>> can't
>> determine why it's doing this other than to try to wrap an
>> artificial
>> 'state' in to the stateless nature of REST.  The only advantage I
>> see
 is
>> that if I kill my server, gfsh knows right away that it's been
> disconnected
>> from it.
>> 
>> I have not yet determined whether or not the socket stays open
>>> through
> all
>> of this.  I suspect that it does or otherwise I'd see a lot of
>>> FIN_WAIT
>> entries in my netstat results.
>> 
>> One possible solution to this is to implement security in the
>>> endpoint.
>> But ShellCommandsContoller.java doesn't have any security in it.
> S

[jira] [Updated] (GEODE-2267) Add gfsh command to export all cluster artifacts

2017-02-02 Thread Jared Stewart (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jared Stewart updated GEODE-2267:
-
Description: We would like a single gfsh command to collect and export all 
logfiles and stat files into a single package that will be returned to the gfsh 
client machine. This package (zipfile) can then be saved and attached to emails 
and Jira tickets to help evaluate the Geode cluster status.  (was: We would 
like a single gfsh command to collect and export all logfiles and stat files 
into a single package. This package (zipfile) can then be saved and attached to 
emails and Jira tickets to help evaluate the Geode cluster status.)

> Add gfsh command to export all cluster artifacts
> 
>
> Key: GEODE-2267
> URL: https://issues.apache.org/jira/browse/GEODE-2267
> Project: Geode
>  Issue Type: New Feature
>  Components: configuration, gfsh
>Reporter: Diane Hardman
>  Labels: ExportClusterArtifacts, export, gfsh, logging, statistics
>
> We would like a single gfsh command to collect and export all logfiles and 
> stat files into a single package that will be returned to the gfsh client 
> machine. This package (zipfile) can then be saved and attached to emails and 
> Jira tickets to help evaluate the Geode cluster status.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2414) Determine a mechanism to stream a zip file from server to locator

2017-02-02 Thread Jared Stewart (JIRA)
Jared Stewart created GEODE-2414:


 Summary: Determine a mechanism to stream a zip file from server to 
locator
 Key: GEODE-2414
 URL: https://issues.apache.org/jira/browse/GEODE-2414
 Project: Geode
  Issue Type: Sub-task
Reporter: Jared Stewart


Our export command will execute a function on servers (one at a time) to build 
up a zip file of the artifacts for that server.  Then, the zip file needs to be 
sent back to the locator, so that the locator can aggregate together the files 
from all servers.  However, we need to make sure to chunk/stream the data that 
we send from server to locator so that neither member will run out of memory if 
the file is very large.
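
A minimal sketch of the chunking idea (illustrative only; the "ChunkSender" 
callback is a hypothetical stand-in for whatever transport ultimately carries 
the bytes from server to locator):

// Stream a file in fixed-size chunks so the whole zip never sits in memory.
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class ChunkedFileStreamer {

  public interface ChunkSender {
    void send(byte[] chunk) throws IOException;
  }

  private static final int CHUNK_SIZE = 64 * 1024; // 64 KB per chunk

  public static void stream(Path zipFile, ChunkSender sender) throws IOException {
    try (InputStream in = Files.newInputStream(zipFile)) {
      byte[] buffer = new byte[CHUNK_SIZE];
      int read;
      while ((read = in.read(buffer)) != -1) {
        // Copy only the bytes actually read so the final chunk is not padded.
        sender.send(Arrays.copyOf(buffer, read));
      }
    }
  }
}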



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2415) Write a function to return a zip file for a single server

2017-02-02 Thread Jared Stewart (JIRA)
Jared Stewart created GEODE-2415:


 Summary: Write a function to return a zip file for a single server
 Key: GEODE-2415
 URL: https://issues.apache.org/jira/browse/GEODE-2415
 Project: Geode
  Issue Type: Sub-task
Reporter: Jared Stewart


We need to write a function to be executed on each server that will find the 
desired artifacts (logs, stat files, stack traces) on that server given the 
parameters of the export command (date limiting, --exclude-stats, etc) and 
return that zip file to the calling locator using the mechanism determined by 
GEODE-2414.
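
A sketch of the artifact-gathering step (illustrative only; the cutoff-date and 
exclude-stats parameters below are assumptions modeled on the options the 
ticket mentions, and ".gfs" is used here as the conventional extension for 
statistics archives):

// Walk a server's working directory and select log/stat files to package.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ArtifactFinder {

  public static List<Path> findArtifacts(Path workingDir, Instant modifiedAfter,
      boolean excludeStats) throws IOException {
    try (Stream<Path> files = Files.walk(workingDir)) {
      return files
          .filter(Files::isRegularFile)
          .filter(p -> {
            String name = p.getFileName().toString();
            boolean isLog = name.endsWith(".log");
            boolean isStat = name.endsWith(".gfs");
            return isLog || (isStat && !excludeStats);
          })
          .filter(p -> {
            try {
              return modifiedAfter == null
                  || Files.getLastModifiedTime(p).toInstant().isAfter(modifiedAfter);
            } catch (IOException e) {
              return false; // skip files whose timestamp cannot be read
            }
          })
          .collect(Collectors.toList());
    }
  }
}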



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2416) Collect together artifacts from individual servers

2017-02-02 Thread Jared Stewart (JIRA)
Jared Stewart created GEODE-2416:


 Summary: Collect together artifacts from individual servers
 Key: GEODE-2416
 URL: https://issues.apache.org/jira/browse/GEODE-2416
 Project: Geode
  Issue Type: Sub-task
Reporter: Jared Stewart


We need a locator to unzip the individual zip files produced by GEODE-2415 and 
re-zip them together into a single zip file (with a directory for each member, 
containing the artifacts from that member).
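
One possible shape for the aggregation (a sketch only, not the implemented 
mechanism): copy the entries of each member's zip into a combined archive, 
prefixing every entry with that member's name so each member ends up in its 
own directory.

// Combine per-member zips into one zip with a directory per member.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Enumeration;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class ZipAggregator {

  public static void aggregate(Map<String, Path> memberZips, Path combinedZip)
      throws IOException {
    try (OutputStream out = Files.newOutputStream(combinedZip);
         ZipOutputStream zipOut = new ZipOutputStream(out)) {
      for (Map.Entry<String, Path> member : memberZips.entrySet()) {
        try (ZipFile memberZip = new ZipFile(member.getValue().toFile())) {
          Enumeration<? extends ZipEntry> entries = memberZip.entries();
          while (entries.hasMoreElements()) {
            ZipEntry entry = entries.nextElement();
            if (entry.isDirectory()) {
              continue; // directories are implied by the prefixed entry names
            }
            // Re-home the entry under "<memberName>/" in the combined archive.
            zipOut.putNextEntry(new ZipEntry(member.getKey() + "/" + entry.getName()));
            try (InputStream in = memberZip.getInputStream(entry)) {
              byte[] buffer = new byte[8192];
              int read;
              while ((read = in.read(buffer)) != -1) {
                zipOut.write(buffer, 0, read);
              }
            }
            zipOut.closeEntry();
          }
        }
      }
    }
  }
}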



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: gfsh over http & authentication

2017-02-02 Thread Udo Kohlmeyer
Not sure what the correct polling time would be... But we'd want to 
avoid the situation where we poll too often. We could launch our own 
ping-attack.


Also, this UX is not UI, so there is a difference between updating a screen 
with new information vs. checking whether the connection is still valid... I'd 
err on the side of caution and say... poll every 2s and elegantly deal with a 
connection failure when submitting an operation.


--Udo


On 2/2/17 12:07, Michael William Dodge wrote:

A rule I've heard for UX is that anything over 200 ms is noticeable and 
anything over 2 s is slow. Unless polling every 500 ms is causing problems, it 
might be best to leave it at 500 ms as a decent compromise between efficiency 
and responsiveness.

Sarge


On 2 Feb, 2017, at 12:04, John Blum  wrote:


The connection probably doesn't need to poll every 500ms.

500 ms provided a good (nearly consistent) UX for the user to know almost
instantly that the Manager went away, like the JMX counterpart.  2s is
arguable; 5s is probably too long as the user could already be typing
another command that is not available.  In that case they might get another
kind of error (don't recall for sure).  Anyway, food for thought.

If another endpoint is needed (though I cannot imagine why) perhaps `
securePing()` would be more descriptive and still offer up an alternative
route.



On Thu, Feb 2, 2017 at 11:55 AM, Kevin Duling  wrote:


Good to know some history on it.  The connection probably doesn't need to
poll every 500ms.  I would think 2 seconds or even 5 seconds would be
sufficient in the general case.

If we make ping require authentication, it may resolve the issue.  But I'm
not sure that's the right thing to do.  We could create a 'ping2' endpoint
(with some better name that I cannot currently think of) that does require
auth for this thread to validate the connection.

On Thu, Feb 2, 2017 at 11:49 AM, John Blum  wrote:


Back in the day, I introduced this "polling thread" to determine whether
*Gfsh* was still connected, since as you say, in a HTTP "stateless"
environment and in the absence of a "persistent" connection, it otherwise
does not know.

So, to simulate the behavior of *Gfsh* when connected via JMX RMI, I needed to 
poll the Manager.  That way when the Manager was no longer available, it would 
display that *Gfsh* was no longer connected AND that the commands that "require 
a connection" (e.g. `list region`) were no longer available... again preserving 
the existing behavior in HTTP mode.

Security (basic auth) had not been implemented in *Gfsh* at that time when I 
created the Management REST API (or rather, it is more accurate to say... 
REST-like; it's not a true REST-ful interface to be precise, which is one 
reason it never was made public for users to consume, though it could have 
been, provided we introduce the proper notion of REST-ful resource abstractions 
and change the endpoints (URIs) appropriately; anyway...).

-j


On Thu, Feb 2, 2017 at 11:08 AM, Kevin Duling wrote:

Yes it does, immediately on the connect.  So the behavior is different.

On Thu, Feb 2, 2017 at 10:48 AM, Anthony Baker wrote:

Seems odd to me that the ‘connect’ command is where the credentials are 
supplied but the failures are only realized when invoking a secure command.  
So I would need to go back and disconnect / reconnect to fix a password typo.

As a reference point, does ‘connect’ over JMX surface authentication
errors?

Anthony


On Feb 2, 2017, at 10:37 AM, Kevin Duling wrote:

It's been reported in GEODE-2247 that gfsh can connect in a secured 
environment without a username/password when using the --use-http flag.  When 
using a jmx connection, this would immediately prompt for user/password.

In the http environment, the connection isn't any less secure.  The moment one 
attempts to execute a command that an "anonymous user" cannot execute, they 
will receive a failure with a message informing them that the user (in this 
case anonymous) cannot execute that command.  That's all fine and good, but 
the UX should probably be to fail instead on the 'connect' when in a secure 
environment.

Opinions?

The issue is that gfsh uses the 'ping' endpoint to determine connectivity, 
which is not secured.  Moreover, it starts a connection poll, hitting that 
endpoint every 500ms to ensure the connection is still alive.  I can't 
determine why it's doing this other than to try to wrap an artificial 'state' 
in to the stateless nature of REST.  The only advantage I see is that if I 
kill my server, gfsh knows right away that it's been disconnected from it.

I have not yet determined whether or not the socket stays open through all of 
this.  I suspect that it does or otherwise I'd see a lot of FIN_WAIT entries 
in my netstat results.

One possible solution to this is to implement security in the endpoint.  But 
ShellCommandsController.java doesn't have any security in it.  Security is 
handled further downstream.





--
-Jo
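
For readers following along, the polling being discussed in this thread
amounts to something like the sketch below. This is not the actual gfsh
implementation; the ConnectionPoller class, the 200-only status check, and the
interval parameter are all illustrative.

{code}
// Hit a (possibly secured) ping endpoint on a fixed interval and flip the
// shell's connection state when it stops responding.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class ConnectionPoller {
  private final AtomicBoolean connected = new AtomicBoolean(true);

  public void start(String pingUrl, long intervalMillis) {
    ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
    executor.scheduleAtFixedRate(() -> {
      try {
        HttpURLConnection conn = (HttpURLConnection) new URL(pingUrl).openConnection();
        conn.setRequestMethod("GET");
        connected.set(conn.getResponseCode() == 200);
        conn.disconnect();
      } catch (Exception e) {
        // Manager went away; commands that require a connection can be disabled
        connected.set(false);
      }
    }, 0, intervalMillis, TimeUnit.MILLISECONDS);
  }

  public boolean isConnected() {
    return connected.get();
  }
}
{code}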

[jira] [Updated] (GEODE-2416) Collect together artifacts from individual servers into a single zip file

2017-02-02 Thread Jared Stewart (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jared Stewart updated GEODE-2416:
-
Summary: Collect together artifacts from individual servers into a single 
zip file  (was: Collect together artifacts from individual servers)

> Collect together artifacts from individual servers into a single zip file
> -
>
> Key: GEODE-2416
> URL: https://issues.apache.org/jira/browse/GEODE-2416
> Project: Geode
>  Issue Type: Sub-task
>  Components: configuration, gfsh
>Reporter: Jared Stewart
>
> We need a locator to unzip the individual zip files produced by GEODE-2415 
> and re-zip them together into a single zip file (with a directory for each 
> member, containing the artifacts from that member).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2414) Determine a mechanism to stream a zip file from server to locator

2017-02-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850430#comment-15850430
 ] 

ASF subversion and git services commented on GEODE-2414:


Commit c6fa2b9ebc86e1de3535fab6745814ea54ebd30d in geode's branch 
refs/heads/develop from [~gosullivan]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=c6fa2b9 ]

GEODE-2414: mark a failing test as flaky. This closes #382

This is just a temporary fix for CI until we can diagnose the issue.


> Determine a mechanism to stream a zip file from server to locator
> -
>
> Key: GEODE-2414
> URL: https://issues.apache.org/jira/browse/GEODE-2414
> Project: Geode
>  Issue Type: Sub-task
>  Components: configuration, gfsh
>Reporter: Jared Stewart
>
> Our export command will execute a function on servers (one at a time) to 
> build up a zip file of the artifacts for that server.  Then, the zip file 
> needs to be sent back to the locator, so that the locator can aggregate 
> together the files from all servers.  However, we need to make sure to 
> chunk/stream the data that we send from server to locator so that neither 
> member will run out of memory if the file is very large.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] geode pull request #382: GEODE-2414: mark a failing test as flaky.

2017-02-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/geode/pull/382


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (GEODE-2414) Determine a mechanism to stream a zip file from server to locator

2017-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850431#comment-15850431
 ] 

ASF GitHub Bot commented on GEODE-2414:
---

Github user asfgit closed the pull request at:

https://github.com/apache/geode/pull/382


> Determine a mechanism to stream a zip file from server to locator
> -
>
> Key: GEODE-2414
> URL: https://issues.apache.org/jira/browse/GEODE-2414
> Project: Geode
>  Issue Type: Sub-task
>  Components: configuration, gfsh
>Reporter: Jared Stewart
>
> Our export command will execute a function on servers (one at a time) to 
> build up a zip file of the artifacts for that server.  Then, the zip file 
> needs to be sent back to the locator, so that the locator can aggregate 
> together the files from all servers.  However, we need to make sure to 
> chunk/stream the data that we send from server to locator so that neither 
> member will run out of memory if the file is very large.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2417) Generate zip file HTTP URL and manage deletion of zip files

2017-02-02 Thread Jared Stewart (JIRA)
Jared Stewart created GEODE-2417:


 Summary: Generate zip file HTTP URL and manage deletion of zip 
files
 Key: GEODE-2417
 URL: https://issues.apache.org/jira/browse/GEODE-2417
 Project: Geode
  Issue Type: Sub-task
Reporter: Jared Stewart


Once a locator has built the aggregated zip file described by GEODE-2416, we 
need that locator to expose an endpoint through the Admin REST api to allow 
access to that file (perhaps e.g. /exportedArtifact?exportId=foo or 
/exportedArtifact?name=myLogs.zip).  After the zip file has been successfully 
downloaded, it should be deleted and the URL invalidated.  Also, the URL should 
only be valid for some time period (say 24hrs) after which the file will be 
deleted and URL invalidated even if it is not downloaded, to prevent exported 
zip files from polluting the locator's disk.
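
A minimal sketch of the lifecycle bookkeeping this implies (the
ExportedFileRegistry class is hypothetical, and the actual Admin REST endpoint
and its wiring are out of scope here):

{code}
// Track exported zips by id and delete them after download, or once they are
// older than 24 hours and were never downloaded.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class ExportedFileRegistry {
  private static final Duration MAX_AGE = Duration.ofHours(24);
  private final Map<String, Path> filesById = new ConcurrentHashMap<>();
  private final Map<String, Instant> createdAt = new ConcurrentHashMap<>();

  /** Register a freshly built zip and return the id to embed in the download URL. */
  public String register(Path zipFile) {
    String exportId = UUID.randomUUID().toString();
    filesById.put(exportId, zipFile);
    createdAt.put(exportId, Instant.now());
    return exportId;
  }

  /** Called by the download endpoint after a successful transfer. */
  public void completeDownload(String exportId) throws IOException {
    Path file = filesById.remove(exportId);
    createdAt.remove(exportId);
    if (file != null) {
      Files.deleteIfExists(file);
    }
  }

  /** Run periodically to drop zips that were never downloaded. */
  public void expireOldExports() throws IOException {
    for (Map.Entry<String, Instant> entry : createdAt.entrySet()) {
      if (Duration.between(entry.getValue(), Instant.now()).compareTo(MAX_AGE) > 0) {
        completeDownload(entry.getKey());
      }
    }
  }
}
{code}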



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2418) Add gfsh post execution handler to detect and download file URLs

2017-02-02 Thread Jared Stewart (JIRA)
Jared Stewart created GEODE-2418:


 Summary: Add gfsh post execution handler to detect and download 
file URLs
 Key: GEODE-2418
 URL: https://issues.apache.org/jira/browse/GEODE-2418
 Project: Geode
  Issue Type: Sub-task
Reporter: Jared Stewart


Rather than return the zip file contents in the 'export logs' command result 
from a locator to a gfsh client, we will return a URL to the exported zip file 
(GEODE-2417).  We need to write a gfsh post-execution handler (see 
`org.apache.geode.management.internal.cli.commands.ExportImportClusterConfigurationCommands.ExportInterceptor`)
 to extract the file URL from the result JSON and download that file via HTTP 
onto the gfsh client's disk.
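
The download half of such an interceptor could be as small as the sketch below
(the ExportedFileDownloader helper is hypothetical; a real interceptor would
also parse the URL out of the result JSON and handle authentication):

{code}
// Copy the exported zip from the locator's URL onto the gfsh client's disk.
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class ExportedFileDownloader {
  public static Path download(String fileUrl, String targetDir) throws IOException {
    // simplified name derivation; a real implementation would sanitize this
    String fileName = fileUrl.substring(fileUrl.lastIndexOf('/') + 1);
    Path target = Paths.get(targetDir, fileName);
    try (InputStream in = new URL(fileUrl).openStream()) {
      Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
    }
    return target;
  }
}
{code}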



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2417) Generate zip file HTTP URL and manage deletion of zip files

2017-02-02 Thread Jared Stewart (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jared Stewart updated GEODE-2417:
-
Description: Once a locator has built the aggregated zip file described by 
GEODE-2416, we need that locator to expose an endpoint through the Admin REST 
api to allow access to that file (perhaps e.g. /exportedArtifact?exportId=foo 
or /exportedArtifact?name=myLogs.zip).  After the zip file has been 
successfully downloaded, it should be deleted and the URL invalidated.  Also, 
the URL should only be valid for some time period (say 24hrs) after which the 
file will be deleted and URL invalidated even if it is not downloaded, to 
prevent exported zip files from polluting the locator's disk.  This URL should 
be returned in the 'export log' command result JSON rather than the file 
contents.  (was: Once a locator has built the aggregated zip file described by 
GEODE-2416, we need that locator to expose an endpoint through the Admin REST 
api to allow access to that file (perhaps e.g. /exportedArtifact?exportId=foo 
or /exportedArtifact?name=myLogs.zip).  After the zip file has been 
successfully downloaded, it should be deleted and the URL invalidated.  Also, 
the URL should only be valid for some time period (say 24hrs) after which the 
file will be deleted and URL invalidated even if it is not downloaded, to 
prevent exported zip files from polluting the locator's disk.)

> Generate zip file HTTP URL and manage deletion of zip files
> ---
>
> Key: GEODE-2417
> URL: https://issues.apache.org/jira/browse/GEODE-2417
> Project: Geode
>  Issue Type: Sub-task
>  Components: configuration, gfsh
>Reporter: Jared Stewart
>
> Once a locator has built the aggregated zip file described by GEODE-2416, we 
> need that locator to expose an endpoint through the Admin REST api to allow 
> access to that file (perhaps e.g. /exportedArtifact?exportId=foo or 
> /exportedArtifact?name=myLogs.zip).  After the zip file has been successfully 
> downloaded, it should be deleted and the URL invalidated.  Also, the URL 
> should only be valid for some time period (say 24hrs) after which the file 
> will be deleted and URL invalidated even if it is not downloaded, to prevent 
> exported zip files from polluting the locator's disk.  This URL should be 
> returned in the 'export log' command result JSON rather than the file 
> contents.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (GEODE-2267) Add gfsh command to export all cluster artifacts

2017-02-02 Thread Jared Stewart (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850451#comment-15850451
 ] 

Jared Stewart edited comment on GEODE-2267 at 2/2/17 8:37 PM:
--

Per discussion on the dev list, we have decided to implement this as a change 
to the existing `export logs` command rather than as a new `export artifacts` 
command.  The default behavior should be to include stat files, but to provide 
an option like `--exclude-stats`.  We likely also want to run `export 
stacktrace` automatically for a user as the first step of running `export logs` 
to make the support/troubleshooting processes as easy as possible.


was (Author: jstewart):
Per discussion on the dev list, we have decided to implement this as a change 
to the existing `export logs` command rather than as a new `export artifacts` 
command.  The default behavior should be to include stat files, but to provide 
an option like `--exclude-stats`.  We likely also want to run `export 
stacktrace` automatically for a user as the first step of running `export logs` 
to make the support/troubleshooting processes as easy as possible.

> Add gfsh command to export all cluster artifacts
> 
>
> Key: GEODE-2267
> URL: https://issues.apache.org/jira/browse/GEODE-2267
> Project: Geode
>  Issue Type: New Feature
>  Components: configuration, gfsh
>Reporter: Diane Hardman
>  Labels: ExportClusterArtifacts, export, gfsh, logging, statistics
>
> We would like a single gfsh command to collect and export all logfiles and 
> stat files into a single package that will be returned to the gfsh client 
> machine. This package (zipfile) can then be saved and attached to emails and 
> Jira tickets to help evaluate the Geode cluster status.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2267) Add gfsh command to export all cluster artifacts

2017-02-02 Thread Jared Stewart (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850451#comment-15850451
 ] 

Jared Stewart commented on GEODE-2267:
--

Per discussion on the dev list, we have decided to implement this as a change 
to the existing `export logs` command rather than as a new `export artifacts` 
command.  The default behavior should be to include stat files, but to provide 
an option like `--exclude-stats`.  We likely also want to run `export 
stacktrace` automatically for a user as the first step of running `export logs` 
to make the support/troubleshooting processes as easy as possible.

> Add gfsh command to export all cluster artifacts
> 
>
> Key: GEODE-2267
> URL: https://issues.apache.org/jira/browse/GEODE-2267
> Project: Geode
>  Issue Type: New Feature
>  Components: configuration, gfsh
>Reporter: Diane Hardman
>  Labels: ExportClusterArtifacts, export, gfsh, logging, statistics
>
> We would like a single gfsh command to collect and export all logfiles and 
> stat files into a single package that will be returned to the gfsh client 
> machine. This package (zipfile) can then be saved and attached to emails and 
> Jira tickets to help evaluate the Geode cluster status.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2420) Warn a user if they try to export too much data

2017-02-02 Thread Jared Stewart (JIRA)
Jared Stewart created GEODE-2420:


 Summary: Warn a user if they try to export too much data
 Key: GEODE-2420
 URL: https://issues.apache.org/jira/browse/GEODE-2420
 Project: Geode
  Issue Type: Sub-task
Reporter: Jared Stewart


We should warn a user and prompt for confirmation before trying to perform an 
`export logs` operation that would result in a file over some threshold.  (Logs 
and stats have the potential to be very large.)
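
A minimal sketch, assuming each member can report an estimated artifact size
up front (the threshold and the ExportSizeCheck class are illustrative):

{code}
// Sum the per-member estimates and only prompt when the total crosses a threshold.
import java.util.Collection;

public class ExportSizeCheck {
  private static final long WARN_THRESHOLD_BYTES = 1024L * 1024 * 1024; // 1 GB (illustrative)

  public static boolean needsConfirmation(Collection<Long> estimatedSizesPerMember) {
    long total = estimatedSizesPerMember.stream().mapToLong(Long::longValue).sum();
    return total > WARN_THRESHOLD_BYTES;
  }
}
{code}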



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2419) Add error message for export logs if Admin REST is disabled

2017-02-02 Thread Jared Stewart (JIRA)
Jared Stewart created GEODE-2419:


 Summary: Add error message for export logs if Admin REST is 
disabled
 Key: GEODE-2419
 URL: https://issues.apache.org/jira/browse/GEODE-2419
 Project: Geode
  Issue Type: Sub-task
Reporter: Jared Stewart


Our strategy for the revised `export logs` command relies on the locator 
running the Admin REST API, which is enabled by default but can be optionally 
disabled.  We need to add a good error message for the `export logs` command if 
Admin REST is disabled on the locator.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: gfsh over http & authentication

2017-02-02 Thread John Blum
@Udo-

That is the thing.  It is not just UI; the ping Thread is also there to
properly set the state of *Gfsh* such that certain commands are not
inappropriately made available as well (think *Gfsh* scripting, which if I
remember correctly leads to a different type of error...
NotConnectedException (with an appropriate message) vs. whatever the 404
translated to), regardless of the polling time.

On Thu, Feb 2, 2017 at 12:22 PM, Udo Kohlmeyer 
wrote:

> Not sure what the correct polling time would be... But we'd want to avoid
> the situation where we poll too often. We could launch our own ping-attack.
>
> Also, this UX is not UI, so the difference between updating a screen with
> new information vs checking if the connection is still valid... I'd err on
> the side of caution and say... poll 2s and elegantly deal with a connection
> failure when submitting an operation.
>
> --Udo
>
>
>
> On 2/2/17 12:07, Michael William Dodge wrote:
>
>> A rule I've heard for UX is that anything over 200 ms is noticeable and
>> anything over 2 s is slow. Unless polling every 500 ms is causing problems,
>> it might be best to leave it at 500 ms as a decent compromise between
>> efficiency and responsiveness.
>>
>> Sarge
>>
>> On 2 Feb, 2017, at 12:04, John Blum  wrote:
>>>
>>> The connection probably doesn't need to poll every 500ms.

>>> 500 ms provided a good (nearly consistent) UX for the user to know almost
>>> instantly that the Manager went away, like the JMX counterpart.  2s is
>>> arguable; 5s is probably too long as the user could already be typing
>>> another command that is not available.  In that case they might get
>>> another
>>> kind of error (don't recall for sure).  Anyway, food for thought.
>>>
>>> If another endpoint is needed (though I cannot imagine why) perhaps `
>>> securePing()` would be more descriptive and still offer up an alternative
>>> route.
>>>
>>>
>>>
>>> On Thu, Feb 2, 2017 at 11:55 AM, Kevin Duling 
>>> wrote:
>>>
>>> Good to know some history on it.  The connection probably doesn't need to
 poll every 500ms.  I would think 2 seconds or even 5 seconds would be
 sufficient in the general case.

 If we make ping require authentication, it may resolve the issue.  But
 I'm
 not sure that's the right thing to do.  We could create a 'ping2'
 endpoint
 (with some better name that I cannot currently think of) that does
 require
 auth for this thread to validate the connection.

 On Thu, Feb 2, 2017 at 11:49 AM, John Blum  wrote:

 Back in the day, I introduced this "polling thread" to determine whether
> *Gfsh* was still connected, since as you say, in a HTTP "stateless"
> environment and in the absence of a "persistent" connection, it
> otherwise
> does not know.
>
> So, to simulate the behavior of *Gfsh* when connected via JMX RMI, I
>
 needed

> to poll the Manager.  That way when the Manager was no longer
> available,
>
 it

> would display that *Gfsh* was no longer connected AND that the commands
> that "require a connection" (e.g. `list region`) were no longer
> available... again preserving the existing behavior in HTTP mode.
>
> Security (basic auth) had not been implemented in *Gfsh* at that time
>
 when

> I created the Management REST API (or rather, it is more accurate to
>
 say...

> REST-like; it's not a true REST-ful interface to be precise, which is
> one
> reason it never was made public for users to consume, though it could
>
 have

> been, providing we introduce the proper notion of  REST-ful resources
> abstractions and change the endpoints (URIs) appropriately; anyway...).
>
> -j
>
>
> On Thu, Feb 2, 2017 at 11:08 AM, Kevin Duling 
>
 wrote:

> Yes it does, immediately on the connect.  So the behavior is different.
>>
>> On Thu, Feb 2, 2017 at 10:48 AM, Anthony Baker 
>>
> wrote:
>
>> Seems odd to me that the ‘connect’ command is where the credentials
>>>
>> are

> supplied but the failures are only realized when invoking a secure
>>> command.  So I would need to go back and disconnect / reconnect to
>>>
>> fix

> a
>
>> password typo.
>>>
>>> As a reference point, does ‘connect’ over JMX surface authentication
>>> errors?
>>>
>>> Anthony
>>>
>>> On Feb 2, 2017, at 10:37 AM, Kevin Duling 

>>> wrote:
>
>> It's been reported in GEODE-2247 that gfsh can connect in a secured
 environment without a username/password when using the --use-http

>>> flag.
>
>> When using a jmx connection, this would immediately prompt for
 user/password.

 In the http environment, the connection isn't any less secure.  The

>>> moment
>>>
 one attempts to execute a command that an "an

[jira] [Updated] (GEODE-2417) Generate zip file HTTP URL and manage deletion of zip files

2017-02-02 Thread Jared Stewart (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jared Stewart updated GEODE-2417:
-
Description: Once a locator has built the aggregated zip file described by 
GEODE-2416, we need that locator to expose an endpoint through the Admin REST 
api to allow access to that file (perhaps e.g. /exportedArtifact?exportId=foo 
or /exportedArtifact?name=myLogs.zip).  After the zip file has been 
successfully downloaded, it should be deleted and the URL invalidated.  Also, 
the URL should only be valid for some time period (say 24hrs) after which the 
file will be deleted and URL invalidated even if it is not downloaded, to 
prevent exported zip files from polluting the locator's disk.  This URL should 
be returned in the 'export log' command result JSON rather than the file 
contents.  [Cluster:Read] permissions should be required to access the URL if 
integrated security is enabled.  (was: Once a locator has built the aggregated 
zip file described by GEODE-2416, we need that locator to expose an endpoint 
through the Admin REST api to allow access to that file (perhaps e.g. 
/exportedArtifact?exportId=foo or /exportedArtifact?name=myLogs.zip).  After 
the zip file has been successfully downloaded, it should be deleted and the URL 
invalidated.  Also, the URL should only be valid for some time period (say 
24hrs) after which the file will be deleted and URL invalidated even if it is 
not downloaded, to prevent exported zip files from polluting the locator's 
disk.  This URL should be returned in the 'export log' command result JSON 
rather than the file contents.)

> Generate zip file HTTP URL and manage deletion of zip files
> ---
>
> Key: GEODE-2417
> URL: https://issues.apache.org/jira/browse/GEODE-2417
> Project: Geode
>  Issue Type: Sub-task
>  Components: configuration, gfsh
>Reporter: Jared Stewart
>
> Once a locator has built the aggregated zip file described by GEODE-2416, we 
> need that locator to expose an endpoint through the Admin REST api to allow 
> access to that file (perhaps e.g. /exportedArtifact?exportId=foo or 
> /exportedArtifact?name=myLogs.zip).  After the zip file has been successfully 
> downloaded, it should be deleted and the URL invalidated.  Also, the URL 
> should only be valid for some time period (say 24hrs) after which the file 
> will be deleted and URL invalidated even if it is not downloaded, to prevent 
> exported zip files from polluting the locator's disk.  This URL should be 
> returned in the 'export log' command result JSON rather than the file 
> contents.  [Cluster:Read] permissions should be required to access the URL if 
> integrated security is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: gfsh over http & authentication

2017-02-02 Thread Jinmei Liao
I do believe the connect over http should fail immediately if incorrect or
no credentials are supplied. A securePing sounds like a good way to go.

On Thu, Feb 2, 2017 at 12:32 PM, John Blum  wrote:

> @Udo-
>
> That is the thing.  It is not just UI; the ping Thread is also there to
> properly set the state of *Gfsh* such that certain commands are not
> inappropriately made available as well (think *Gfsh* scripting, which if I
> remember correctly leads to a different type of error...
> NotConnectedException (with an appropriate message) vs. whatever the 404
> translated to), regardless of the polling time.
>
> On Thu, Feb 2, 2017 at 12:22 PM, Udo Kohlmeyer 
> wrote:
>
> > Not sure what the correct polling time would be... But we'd want to avoid
> > the situation where we poll too often. We could launch our own
> ping-attack.
> >
> > Also, this UX is not UI, so the difference between updating a screen with
> > new information vs checking if the connection is still valid... I'd err
> on
> > the side of caution and say... poll 2s and elegantly deal with a
> connection
> > failure when submitting an operation.
> >
> > --Udo
> >
> >
> >
> > On 2/2/17 12:07, Michael William Dodge wrote:
> >
> >> A rule I've heard for UX is that anything over 200 ms is noticeable and
> >> anything over 2 s is slow. Unless polling every 500 ms is causing
> problems,
> >> it might be best to leave it at 500 ms as a decent compromise between
> >> efficiency and responsiveness.
> >>
> >> Sarge
> >>
> >> On 2 Feb, 2017, at 12:04, John Blum  wrote:
> >>>
> >>> The connection probably doesn't need to poll every 500ms.
> 
> >>> 500 ms provided a good (nearly consistent) UX for the user to know
> almost
> >>> instantly that the Manager went away, like the JMX counterpart.  2s is
> >>> arguable; 5s is probably too long as the user could already be typing
> >>> another command that is not available.  In that case they might get
> >>> another
> >>> kind of error (don't recall for sure).  Anyway, food for thought.
> >>>
> >>> If another endpoint is needed (though I cannot imagine why) perhaps `
> >>> securePing()` would be more descriptive and still offer up an
> alternative
> >>> route.
> >>>
> >>>
> >>>
> >>> On Thu, Feb 2, 2017 at 11:55 AM, Kevin Duling 
> >>> wrote:
> >>>
> >>> Good to know some history on it.  The connection probably doesn't need
> to
>  poll every 500ms.  I would think 2 seconds or even 5 seconds would be
>  sufficient in the general case.
> 
>  If we make ping require authentication, it may resolve the issue.  But
>  I'm
>  not sure that's the right thing to do.  We could create a 'ping2'
>  endpoint
>  (with some better name that I cannot currently think of) that does
>  require
>  auth for this thread to validate the connection.
> 
>  On Thu, Feb 2, 2017 at 11:49 AM, John Blum  wrote:
> 
>  Back in the day, I introduced this "polling thread" to determine
> whether
> > *Gfsh* was still connected, since as you say, in a HTTP "stateless"
> > environment and in the absence of a "persistent" connection, it
> > otherwise
> > does not know.
> >
> > So, to simulate the behavior of *Gfsh* when connected via JMX RMI, I
> >
>  needed
> 
> > to poll the Manager.  That way when the Manager was no longer
> > available,
> >
>  it
> 
> > would display that *Gfsh* was no longer connected AND that the
> commands
> > that "require a connection" (e.g. `list region`) were no longer
> > available... again preserving the existing behavior in HTTP mode.
> >
> > Security (basic auth) had not been implemented in *Gfsh* at that time
> >
>  when
> 
> > I created the Management REST API (or rather, it is more accurate to
> >
>  say...
> 
> > REST-like; it's not a true REST-ful interface to be precise, which is
> > one
> > reason it never was made public for users to consume, though it could
> >
>  have
> 
> > been, providing we introduce the proper notion of  REST-ful resources
> > abstractions and change the endpoints (URIs) appropriately;
> anyway...).
> >
> > -j
> >
> >
> > On Thu, Feb 2, 2017 at 11:08 AM, Kevin Duling 
> >
>  wrote:
> 
> > Yes it does, immediately on the connect.  So the behavior is
> different.
> >>
> >> On Thu, Feb 2, 2017 at 10:48 AM, Anthony Baker 
> >>
> > wrote:
> >
> >> Seems odd to me that the ‘connect’ command is where the credentials
> >>>
> >> are
> 
> > supplied but the failures are only realized when invoking a secure
> >>> command.  So I would need to go back and disconnect / reconnect to
> >>>
> >> fix
> 
> > a
> >
> >> password typo.
> >>>
> >>> As a reference point, does ‘connect’ over JMX surface
> authentication
> >>> errors?
> >>>
> >>> Anthony
> >>>
> >>> On Feb 2, 2017, at 10:37 AM

[jira] [Commented] (GEODE-2413) peer-to-peer authentication: Peer need to re-authenticate coordinator while accepting view message

2017-02-02 Thread Jinmei Liao (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850493#comment-15850493
 ] 

Jinmei Liao commented on GEODE-2413:


Is this a feature request or a bug? I thought gateway sender/receiver should 
have mutual auth (even though it's not implemented in 8.2.x either); should 
peer-to-peer have mutual auth as well?

> peer-to-peer authentication: Peer need to re-authenticate coordinator while 
> accepting view message
> --
>
> Key: GEODE-2413
> URL: https://issues.apache.org/jira/browse/GEODE-2413
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Hitesh Khamesra
> Fix For: 1.1.0
>
>
> In peer-to-peer authentication, the coordinator authenticates the joining member. 
> The coordinator then includes that new member in the cluster and sends a new view 
> message. This view message should include the coordinator's credentials so that the 
> joining member can authenticate the coordinator as well (i.e. mutual 
> authentication).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] geode pull request #383: GEODE-2206: Add junit-quickcheck to geode-core.

2017-02-02 Thread galen-pivotal
GitHub user galen-pivotal opened a pull request:

https://github.com/apache/geode/pull/383

GEODE-2206: Add junit-quickcheck to geode-core.

* Rewrite a data serialization test to use junit-quickcheck.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/galen-pivotal/incubator-geode 
feature/GEODE-2206

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/geode/pull/383.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #383


commit 778f5802b6136637ef50f8ef506efb40021cc758
Author: Galen O'Sullivan 
Date:   2017-01-31T23:05:17Z

GEODE-2206: Add junit-quickcheck to geode-core.

* Rewrite a data serialization test to use junit-quickcheck.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (GEODE-2206) Add junit-quickcheck to Gradle test dependencies.

2017-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850507#comment-15850507
 ] 

ASF GitHub Bot commented on GEODE-2206:
---

GitHub user galen-pivotal opened a pull request:

https://github.com/apache/geode/pull/383

GEODE-2206: Add junit-quickcheck to geode-core.

* Rewrite a data serialization test to use junit-quickcheck.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/galen-pivotal/incubator-geode 
feature/GEODE-2206

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/geode/pull/383.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #383


commit 778f5802b6136637ef50f8ef506efb40021cc758
Author: Galen O'Sullivan 
Date:   2017-01-31T23:05:17Z

GEODE-2206: Add junit-quickcheck to geode-core.

* Rewrite a data serialization test to use junit-quickcheck.




> Add junit-quickcheck to Gradle test dependencies.
> -
>
> Key: GEODE-2206
> URL: https://issues.apache.org/jira/browse/GEODE-2206
> Project: Geode
>  Issue Type: Improvement
>Reporter: Galen O'Sullivan
>Assignee: Galen O'Sullivan
>
> Unit tests allow us to test cases we know about and have thought of. 
> Property-based testing allows us to test those, and some cases we haven't 
> thought of -- you're essentially fuzzing a limited subset of the code. 
> {{junit-quickcheck}} makes it easy to write "property-based" tests with 
> generators for the builtin types. You can also constrain input or build 
> custom generators for constrained data.
> I think this would be especially helpful for testing areas like PDX 
> serialization, which should be able to accept any serializable object a user 
> creates.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Review Request 56242: GEODE-2206: Add junit-quickcheck to geode-core; add a test that uses it.

2017-02-02 Thread Galen O'Sullivan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56242/
---

Review request for geode, Bruce Schuchardt, Hitesh Khamesra, Kirk Lund, and Udo 
Kohlmeyer.


Repository: geode


Description
---

This adds a test dependency on `junit-quickcheck` (and 
`junit-quickcheck-generators` and `junit-quickcheck-guava`) to geode-core. I've 
included an example test of one of the cases in which property-based testing is 
particularly nice: when you have two operations that should reverse each other 
and want to test them with as much garbage as possible.

Property-based testing means basically that you write a function that tests 
some code and checks some conditions you expect to hold true for all inputs, 
and then have a computer program test all sorts of weird inputs to try to prove 
you wrong.

Because the test data is randomly generated, you get to test against more 
inputs than you might even think of, and because the seed is saved, the test is 
reproducible. If `junit-quickcheck` finds one failure, it will even try to 
narrow down to a smallest example of a test failure.

I'm about to send an email to the dev list soliciting feedback.


Diffs
-

  geode-core/build.gradle 3c2a2abf5 
  
geode-core/src/test/java/org/apache/geode/internal/InternalDataSerializerQuickcheckStringTest.java
 PRE-CREATION 
  
geode-core/src/test/java/org/apache/geode/internal/InternalDataSerializerRandomizedJUnitTest.java
 f361de4a2 

Diff: https://reviews.apache.org/r/56242/diff/


Testing
---

The test passes on my machine. This is mostly just adding a dependency, so 
there's not a lot here to test.

I've read some of the source of junit-quickcheck and looked into the data it 
generates: integral numbers seem pretty reasonable. Strings tend to be 
short-ish (length up to hundreds with hundreds of iterations, thousands with 
thousands), but are made up of random codepoints, which is nice.


Thanks,

Galen O'Sullivan



Property-Based Testing for Geode

2017-02-02 Thread Galen M O'Sullivan
Hi all,

I would like to propose adding [junit-quickcheck](1) to Geode. It's named
after the [Haskell tool](2) and functions more or less as automated testing
for JUnit Theories (if anyone is familiar with those).

Property-based testing means basically that you write a function that tests
some code and checks some conditions you expect to hold true for all
inputs, and then have a computer program test all sorts of weird inputs to
try to prove you wrong.

Because the test data is randomly generated, you get to test against more
inputs than you might even think of, and because the seed is saved, the
test is reproducible. If junit-quickcheck finds one failure, it will even
try to narrow down to a smallest example of a test failure.
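
For anyone who has not used junit-quickcheck before, a toy property test (not
the one in the PR) looks like the following; the runner generates many random
String inputs and the assertion has to hold for every one of them:

{code}
// Toy example only: two operations that should reverse each other.
import static org.junit.Assert.assertEquals;

import com.pholser.junit.quickcheck.Property;
import com.pholser.junit.quickcheck.runner.JUnitQuickcheck;
import org.junit.runner.RunWith;

@RunWith(JUnitQuickcheck.class)
public class ReverseTwicePropertyTest {

  @Property
  public void reversingTwiceGivesBackTheOriginal(String input) {
    String roundTripped = new StringBuilder(input).reverse().reverse().toString();
    assertEquals(input, roundTripped);
  }
}
{code}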

There are some limitations to this library -- for example, it doesn't tend
to generate strings more than a few hundred characters in length (though
this increases with sample size, so if you kick the sample size up, you can
get into the thousands fairly quickly).

[ScalaCheck](3) is another option that does much the same and seems to have
more functionality (and in particular, it seems to be able to handle state
in tests), but it requires Scala to run and tests are also written in Scala
(though it can test Java code). I don't think there will be much support
for including Scala as a dependency for Geode.

I've put up a Review board request and PR:
https://reviews.apache.org/r/56242/
https://github.com/apache/geode/pull/383


I'd like to hear the community's input.

Thanks,
Galen O'Sullivan

[1]: http://pholser.github.io/junit-quickcheck/site/0.7/
[2]: http://www.cse.chalmers.se/~rjmh/QuickCheck/manual.html
[3]: http://www.scalacheck.org/index.html


[jira] [Commented] (GEODE-2386) Unable to launch dunit VMs in nightly builds

2017-02-02 Thread Dan Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850537#comment-15850537
 ] 

Dan Smith commented on GEODE-2386:
--

My previous theory about why the classpath was set to the gradle wrapper was 
wrong. I was able to reproduce this failure by applying the attached 
reproduce.patch and running this:

./gradlew geode-lucene:distributedTest --tests '*MySuiteDUnitTest*' 

This suite reproduces the order in which the nightly builds are running the 
lucene tests.

What I found is that LuceneClusterConfigurationDUnitTest ends up invoking the 
DistributedRestoreSystemProperties rule twice, once because it extends 
CliCommandTestBase and once because it uses LocatorServerStartupRule.

DistributedRestoreSystemProperties has a bug where, if it is invoked twice, it 
ends up calling System.setProperties(null), because it nulls out the value of 
originalProperties in the first call to after.

If you call System.setProperties(null), the JVM calls the native initProperties 
and sets the system properties back to their original values generated by the 
JVM. This loses the new value of java.class.path that is set by gradle's 
BootstrapSecurityManager.


> Unable to launch dunit VMs in nightly builds
> 
>
> Key: GEODE-2386
> URL: https://issues.apache.org/jira/browse/GEODE-2386
> Project: Geode
>  Issue Type: Bug
>  Components: build
>Reporter: Dan Smith
> Fix For: 1.1.0
>
>
> The recent apache nightly builds for the release branch and develop are 
> seeing lucene tests fail with "java.lang.RuntimeException: Unable to launch 
> dunit VMs". In the logs we see this error message:
> "[locator] Error: Could not find or load main class 
> org.apache.geode.test.dunit.standalone.ChildVM"
> We need to figure out what's going on.
> https://builds.apache.org/job/Geode-release/40/#showFailuresLink
> https://builds.apache.org/job/Geode-nightly/731/
> {noformat}
> Stacktrace
> java.lang.RuntimeException: Unable to launch dunit VMs
>   at 
> org.apache.geode.test.dunit.standalone.DUnitLauncher.launchIfNeeded(DUnitLauncher.java:144)
>   at 
> org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.initializeDistributedTestCase(JUnit4DistributedTestCase.java:131)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  

[jira] [Updated] (GEODE-2386) Unable to launch dunit VMs in nightly builds

2017-02-02 Thread Dan Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dan Smith updated GEODE-2386:
-
Attachment: reproduce.patch

> Unable to launch dunit VMs in nightly builds
> 
>
> Key: GEODE-2386
> URL: https://issues.apache.org/jira/browse/GEODE-2386
> Project: Geode
>  Issue Type: Bug
>  Components: build
>Reporter: Dan Smith
> Fix For: 1.1.0
>
> Attachments: reproduce.patch
>
>
> The recent apache nightly builds for the release branch and develop are 
> seeing lucene tests fail with "java.lang.RuntimeException: Unable to launch 
> dunit VMs". In the logs we see this error message:
> "[locator] Error: Could not find or load main class 
> org.apache.geode.test.dunit.standalone.ChildVM"
> We need to figure out what's going on.
> https://builds.apache.org/job/Geode-release/40/#showFailuresLink
> https://builds.apache.org/job/Geode-nightly/731/
> {noformat}
> Stacktrace
> java.lang.RuntimeException: Unable to launch dunit VMs
>   at 
> org.apache.geode.test.dunit.standalone.DUnitLauncher.launchIfNeeded(DUnitLauncher.java:144)
>   at 
> org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.initializeDistributedTestCase(JUnit4DistributedTestCase.java:131)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: VMs did not start up within 120 seconds
>   at 
> org.apache.geode.test.dunit.standalone.DUnitLauncher.launch(DUnitLauncher.java:220)
>   at 
> org.apache.geode.test.dunit.standalone.DUnitLauncher.launchIfNeeded(DUnitLauncher.java:142)
>   ... 34 more
> Standard Output
> Executing [/usr/local/asfpackages/java/jdk1.8.0_102/jre/bin/java, -classpath, 
> /home/jenkins/jenkins-slave/workspace/Geode-release/cach

[jira] [Comment Edited] (GEODE-2386) Unable to launch dunit VMs in nightly builds

2017-02-02 Thread Dan Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850537#comment-15850537
 ] 

Dan Smith edited comment on GEODE-2386 at 2/2/17 9:28 PM:
--

My previous theory about why the classpath was set to the gradle wrapper was 
wrong. I was able to reproduce this failure by applying the attached 
reproduce.patch and running this:

{code}
./gradlew geode-lucene:distributedTest --tests '*MySuiteDUnitTest*'
{code}

This suite reproduces the order in which the nightly builds are running the 
lucene tests.

What I found is that LuceneClusterConfigurationDUnitTest ends up invoking the 
DistributedRestoreSystemProperties rule twice, once because it extends 
CliCommandTestBase and once because it uses LocatorServerStartupRule.

DistributedRestoreSystemProperties has a bug where, if it is invoked twice, it 
ends up calling System.setProperties(null), because it nulls out the value of 
originalProperties in the first call to after.

If you call System.setProperties(null), the JVM calls the native initProperties 
and sets the system properties back to their original values generated by the 
JVM. This loses the new value of java.class.path that is set by gradle's 
BootstrapSecurityManager.



was (Author: upthewaterspout):
My previous theory about why the classpath was set to the gradle wrapper was 
wrong. I was able to reproduce this failure by applying the attached 
reproduce.patch and running this:

./gradlew geode-lucene:distributedTest --tests '*MySuiteDUnitTest*' 

This suite reproduces the order in which the nightly builds are running the 
lucene tests.

What I found is that LuceneClusterConfigurationDUnitTest ends up invoking the 
DistributedRestoreSystemProperties rule twice, once because it extends 
CliCommandTestBase and once because it uses LocatorServerStartupRule.

DistributedRestoreSystemProperties has a bug where, if it is invoked twice, it 
ends up calling System.setProperties(null), because it nulls out the value of 
originalProperties in the first call to after.

If you call System.setProperties(null), the JVM calls the native initProperties 
and sets the system properties back to their original values generated by the 
JVM. This loses the new value of java.class.path that is set by gradle's 
BootstrapSecurityManager.


> Unable to launch dunit VMs in nightly builds
> 
>
> Key: GEODE-2386
> URL: https://issues.apache.org/jira/browse/GEODE-2386
> Project: Geode
>  Issue Type: Bug
>  Components: build
>Reporter: Dan Smith
> Fix For: 1.1.0
>
> Attachments: reproduce.patch
>
>
> The recent apache nightly builds for the release branch and develop are 
> seeing lucene tests fail with "java.lang.RuntimeException: Unable to launch 
> dunit VMs". In the logs we see this error message:
> "[locator] Error: Could not find or load main class 
> org.apache.geode.test.dunit.standalone.ChildVM"
> We need to figure out what's going on.
> https://builds.apache.org/job/Geode-release/40/#showFailuresLink
> https://builds.apache.org/job/Geode-nightly/731/
> {noformat}
> Stacktrace
> java.lang.RuntimeException: Unable to launch dunit VMs
>   at 
> org.apache.geode.test.dunit.standalone.DUnitLauncher.launchIfNeeded(DUnitLauncher.java:144)
>   at 
> org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.initializeDistributedTestCase(JUnit4DistributedTestCase.java:131)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl

[GitHub] geode issue #383: GEODE-2206: Add junit-quickcheck to geode-core.

2017-02-02 Thread galen-pivotal
Github user galen-pivotal commented on the issue:

https://github.com/apache/geode/pull/383
  
@scmbuildguy : done.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (GEODE-2206) Add junit-quickcheck to Gradle test dependencies.

2017-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850550#comment-15850550
 ] 

ASF GitHub Bot commented on GEODE-2206:
---

Github user galen-pivotal commented on the issue:

https://github.com/apache/geode/pull/383
  
@scmbuildguy : done.


> Add junit-quickcheck to Gradle test dependencies.
> -
>
> Key: GEODE-2206
> URL: https://issues.apache.org/jira/browse/GEODE-2206
> Project: Geode
>  Issue Type: Improvement
>Reporter: Galen O'Sullivan
>Assignee: Galen O'Sullivan
>
> Unit tests allow us to test cases we know about and have thought of. 
> Property-based testing allows us to test those, and some cases we haven't 
> thought of -- you're essentially fuzzing a limited subset of the code. 
> {{junit-quickcheck}} makes it easy to write "property-based" tests with 
> generators for the builtin types. You can also constrain input or build 
> custom generators for constrained data.
> I think this would be especially helpful for testing areas like PDX 
> serialization, which should be able to accept any serializable object a user 
> creates.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Review Request 56242: GEODE-2206: Add junit-quickcheck to geode-core; add a test that uses it.

2017-02-02 Thread Galen O'Sullivan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56242/
---

(Updated Feb. 2, 2017, 9:35 p.m.)


Review request for geode, Bruce Schuchardt, Hitesh Khamesra, Kirk Lund, and Udo 
Kohlmeyer.


Changes
---

Put versions in `gradle/dependency-versions.properties`.


Repository: geode


Description
---

This adds a test dependency on `junit-quickcheck` (and 
`junit-quickcheck-generators` and `junit-quickcheck-guava`) to geode-core. I've 
included an example test of one of the cases in which property-based testing is 
particularly nice: when you have two operations that should reverse each other 
and want to test them with as much garbage as possible.

Property-based testing means basically that you write a function that tests 
some code and checks some conditions you expect to hold true for all inputs, 
and then have a computer program test all sorts of weird inputs to try to prove 
you wrong.

Because the test data is randomly generated, you get to test against more 
inputs than you might even think of, and because the seed is saved, the test is 
reproducible. If `junit-quickcheck` finds one failure, it will even try to 
narrow down to a smallest example of a test failure.

I'm about to send an email to the dev list soliciting feedback.


Diffs (updated)
-

  geode-core/build.gradle 3c2a2abf5 
  
geode-core/src/test/java/org/apache/geode/internal/InternalDataSerializerQuickcheckStringTest.java
 PRE-CREATION 
  
geode-core/src/test/java/org/apache/geode/internal/InternalDataSerializerRandomizedJUnitTest.java
 f361de4a2 
  gradle/dependency-versions.properties fbc76e012 

Diff: https://reviews.apache.org/r/56242/diff/


Testing
---

The test passes on my machine. This is mostly just adding a dependency, so 
there's not a lot here to test.

I've read some of the source of junit-quickcheck and looked into the data it 
generates: integral numbers seem pretty reasonable. Strings tend to be 
short-ish (length up to hundreds with hundreds of iterations, thousands with 
thousands), but are made up of random codepoints, which is nice.


Thanks,

Galen O'Sullivan



Review Request 56243: GEODE-2386 Don't call System.setProperties(null) when rule is used twice

2017-02-02 Thread Dan Smith

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56243/
---

Review request for geode and Kirk Lund.


Repository: geode


Description
---

Fixing DistributedRestoreSystemProperties rule so that if the rule is
included twice within a test, it does not end up calling
System.setProperties(null). That will prevent us from losing the value
of java.class.path set by gradle.

Reverting the workaround that didn't actually work.
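
Not the actual patch, but the guard it describes is roughly this shape
(illustrative class with before/after methods in the style of a JUnit
ExternalResource rule):

{code}
// Only restore (and clear) the saved properties when this instance actually
// saved them, so applying the rule twice can never pass null to setProperties().
import java.util.Properties;

public class RestoreSystemPropertiesSketch {
  private Properties originalProperties;

  public void before() {
    originalProperties = (Properties) System.getProperties().clone();
  }

  public void after() {
    if (originalProperties != null) {
      System.setProperties(originalProperties);
      originalProperties = null;
    }
    // Without the null check, a second after() call would invoke
    // System.setProperties(null), which resets java.class.path to the JVM
    // defaults and produces the "Could not find or load main class ChildVM"
    // failure seen in the nightly builds.
  }
}
{code}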


Diffs
-

  
geode-core/src/test/java/org/apache/geode/test/dunit/rules/DistributedRestoreSystemProperties.java
 7e6198e865e33908c0e89f4e0a0c20328f56d55e 
  
geode-core/src/test/java/org/apache/geode/test/dunit/standalone/ProcessManager.java
 3b02b4b5e320849e431e9f6720451452639d4c65 

Diff: https://reviews.apache.org/r/56243/diff/


Testing
---


Thanks,

Dan Smith



[jira] [Created] (GEODE-2421) Create VS2015 AMI

2017-02-02 Thread Ernest Burghardt (JIRA)
Ernest Burghardt created GEODE-2421:
---

 Summary: Create VS2015 AMI
 Key: GEODE-2421
 URL: https://issues.apache.org/jira/browse/GEODE-2421
 Project: Geode
  Issue Type: Task
  Components: native client
Reporter: Ernest Burghardt






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Review Request 56243: GEODE-2386 Don't call System.setProperties(null) when rule is used twice

2017-02-02 Thread Kirk Lund

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56243/#review164028
---


Ship it!




Ship It!

- Kirk Lund


On Feb. 2, 2017, 9:38 p.m., Dan Smith wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56243/
> ---
> 
> (Updated Feb. 2, 2017, 9:38 p.m.)
> 
> 
> Review request for geode and Kirk Lund.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Fixing DistributedRestoreSystemProperties rule so that if the rule is
> included twice within a test, it does not end up calling
> System.setProperties(null). That will prevent us from losing the value
> of java.class.path set by gradle.
> 
> Reverting the workaround that didn't actually work.
> 
> 
> Diffs
> -
> 
>   
> geode-core/src/test/java/org/apache/geode/test/dunit/rules/DistributedRestoreSystemProperties.java
>  7e6198e865e33908c0e89f4e0a0c20328f56d55e 
>   
> geode-core/src/test/java/org/apache/geode/test/dunit/standalone/ProcessManager.java
>  3b02b4b5e320849e431e9f6720451452639d4c65 
> 
> Diff: https://reviews.apache.org/r/56243/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Dan Smith
> 
>



[GitHub] geode pull request #384: GEODE-2421: Adding packer portion of making a VS201...

2017-02-02 Thread echobravopapa
GitHub user echobravopapa opened a pull request:

https://github.com/apache/geode/pull/384

GEODE-2421: Adding packer portion of making a VS2015 dev AMI



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/echobravopapa/geode feature/GEODE-2421

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/geode/pull/384.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #384


commit 22638e2c57f66d511a8cb16d68c76ace834c16d8
Author: Ernest Burghardt 
Date:   2017-02-02T22:03:10Z

GEODE-2421: Adding packer portion of making a VS2015 dev AMI




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (GEODE-2421) Create VS2015 AMI

2017-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850579#comment-15850579
 ] 

ASF GitHub Bot commented on GEODE-2421:
---

GitHub user echobravopapa opened a pull request:

https://github.com/apache/geode/pull/384

GEODE-2421: Adding packer portion of making a VS2015 dev AMI



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/echobravopapa/geode feature/GEODE-2421

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/geode/pull/384.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #384


commit 22638e2c57f66d511a8cb16d68c76ace834c16d8
Author: Ernest Burghardt 
Date:   2017-02-02T22:03:10Z

GEODE-2421: Adding packer portion of making a VS2015 dev AMI




> Create VS2015 AMI
> -
>
> Key: GEODE-2421
> URL: https://issues.apache.org/jira/browse/GEODE-2421
> Project: Geode
>  Issue Type: Task
>  Components: native client
>Reporter: Ernest Burghardt
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Review Request 56243: GEODE-2386 Don't call System.setProperties(null) when rule is used twice

2017-02-02 Thread Jared Stewart

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56243/#review164031
---


Ship it!




Ship It!

- Jared Stewart


On Feb. 2, 2017, 9:38 p.m., Dan Smith wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56243/
> ---
> 
> (Updated Feb. 2, 2017, 9:38 p.m.)
> 
> 
> Review request for geode and Kirk Lund.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Fixing DistributedRestoreSystemProperties rule so that if the rule is
> included twice within a test, it does not end up calling
> System.setProperties(null). That will prevent us from losing the value
> of java.class.path set by gradle.
> 
> Reverting the workaround that didn't actually work.
> 
> 
> Diffs
> -
> 
>   
> geode-core/src/test/java/org/apache/geode/test/dunit/rules/DistributedRestoreSystemProperties.java
>  7e6198e865e33908c0e89f4e0a0c20328f56d55e 
>   
> geode-core/src/test/java/org/apache/geode/test/dunit/standalone/ProcessManager.java
>  3b02b4b5e320849e431e9f6720451452639d4c65 
> 
> Diff: https://reviews.apache.org/r/56243/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Dan Smith
> 
>



Re: Volunteer For Creating February Board Report

2017-02-02 Thread Dave Barnes
I'll volunteer for February.

On Thu, Feb 2, 2017 at 10:50 AM, Mark Bretl  wrote:

> Hi,
>
> It's that time again for us to create a board report for the February 15th
> board meeting. Are there any volunteers for creating a draft?
>
> The initial report should be submitted by next Wednesday, February 8th. We can
> edit the report until Friday the 10th, after which it is customary for board
> members to start reviewing reports.
>
> Template report can be found at:
> https://cwiki.apache.org/confluence/display/GEODE/ASF+
> Board+Report+Template
>
> Best Regards,
>
> --Mark
>


[jira] [Assigned] (GEODE-2408) Refactor CacheableDate to use C++ std::chrono

2017-02-02 Thread Jacob S. Barrett (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob S. Barrett reassigned GEODE-2408:
---

Assignee: Jacob S. Barrett

> Refactor CacheableDate to use C++ std::chrono
> -
>
> Key: GEODE-2408
> URL: https://issues.apache.org/jira/browse/GEODE-2408
> Project: Geode
>  Issue Type: Task
>  Components: native client
>Reporter: Jacob S. Barrett
>Assignee: Jacob S. Barrett
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2422) Finish converting from GemFire to Geode in cppcache src

2017-02-02 Thread Michael Martell (JIRA)
Michael Martell created GEODE-2422:
--

 Summary: Finish converting from GemFire to Geode in cppcache src
 Key: GEODE-2422
 URL: https://issues.apache.org/jira/browse/GEODE-2422
 Project: Geode
  Issue Type: Task
  Components: native client
Reporter: Michael Martell


There are still some classes in the cppcache src that were not converted to 
Geode from GemFire.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2421) Create VS2015 AMI

2017-02-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850619#comment-15850619
 ] 

ASF subversion and git services commented on GEODE-2421:


Commit 340f2fca80d9388155ed0911712f9a830211b32b in geode's branch 
refs/heads/next-gen-native-client-software-grant from [~eburghardt]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=340f2fc ]

GEODE-2421: Adding packer portion of making a VS2015 dev AMI

This closes #384


> Create VS2015 AMI
> -
>
> Key: GEODE-2421
> URL: https://issues.apache.org/jira/browse/GEODE-2421
> Project: Geode
>  Issue Type: Task
>  Components: native client
>Reporter: Ernest Burghardt
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] geode pull request #383: GEODE-2206: Add junit-quickcheck to geode-core.

2017-02-02 Thread jaredjstewart
Github user jaredjstewart commented on a diff in the pull request:

https://github.com/apache/geode/pull/383#discussion_r99235354
  
--- Diff: 
geode-core/src/test/java/org/apache/geode/internal/InternalDataSerializerQuickcheckStringTest.java
 ---
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for 
additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF 
ANY KIND, either express
+ * or implied. See the License for the specific language governing 
permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal;
+
+import static org.junit.Assert.*;
+
+import com.pholser.junit.quickcheck.Property;
+import com.pholser.junit.quickcheck.runner.JUnitQuickcheck;
+import org.apache.geode.DataSerializer;
+import org.apache.geode.test.junit.categories.UnitTest;
+import org.junit.Before;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+
+import java.io.ByteArrayOutputStream;
+import java.io.ByteArrayInputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+/**
+ * Tests the serialization and deserialization of randomly generated 
Strings.
+ *
+ * The current implementation (0.7 or 0.8alpha2) of 
junit-quickcheck-generators only generates valid
+ * codepoints, and that it doesn't tend to test strings that are 
particularly long, though the more
+ * trials you run, the longer they get.
+ */
+@Category(UnitTest.class)
+@RunWith(JUnitQuickcheck.class)
+public class InternalDataSerializerQuickcheckStringTest {
+  @Property(trials = 1000)
+  public void StringSerializedDeserializesToSameValue(String 
originalString) throws IOException {
+ByteArrayOutputStream byteArrayOutputStream = new 
ByteArrayOutputStream();
+DataOutputStream dataOutputStream = new 
DataOutputStream(byteArrayOutputStream);
+
+DataSerializer.writeString(originalString, dataOutputStream);
+dataOutputStream.flush();
+
+byte[] stringBytes = byteArrayOutputStream.toByteArray();
+DataInputStream dataInputStream = new DataInputStream(new 
ByteArrayInputStream(stringBytes));
+String returnedString = DataSerializer.readString(dataInputStream);
+
+assertEquals("Deserialized string matches original", originalString, 
returnedString);
+  }
+
+  @Before
+  public void setUp() {
+// this may be unnecessary, but who knows what tests run before us.
+InternalDataSerializer.reinitialize();
--- End diff --

If the goal here is to protect against tests that ran before this, you may 
want to use `@BeforeClass` instead of `@Before` so that it only gets invoked once 
for the class, rather than once for each of the (1000) trials.
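
For illustration, the suggested change would look roughly like this (a sketch, not the actual patch):

    // import org.junit.BeforeClass instead of org.junit.Before;
    // a @BeforeClass method must be static and runs once per class, not once per trial
    @BeforeClass
    public static void setUpClass() {
      InternalDataSerializer.reinitialize();
    }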


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (GEODE-2206) Add junit-quickcheck to Gradle test dependencies.

2017-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850622#comment-15850622
 ] 

ASF GitHub Bot commented on GEODE-2206:
---

Github user jaredjstewart commented on a diff in the pull request:

https://github.com/apache/geode/pull/383#discussion_r99235354
  
--- Diff: 
geode-core/src/test/java/org/apache/geode/internal/InternalDataSerializerQuickcheckStringTest.java
 ---
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for 
additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF 
ANY KIND, either express
+ * or implied. See the License for the specific language governing 
permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal;
+
+import static org.junit.Assert.*;
+
+import com.pholser.junit.quickcheck.Property;
+import com.pholser.junit.quickcheck.runner.JUnitQuickcheck;
+import org.apache.geode.DataSerializer;
+import org.apache.geode.test.junit.categories.UnitTest;
+import org.junit.Before;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+
+import java.io.ByteArrayOutputStream;
+import java.io.ByteArrayInputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+/**
+ * Tests the serialization and deserialization of randomly generated 
Strings.
+ *
+ * The current implementation (0.7 or 0.8alpha2) of 
junit-quickcheck-generators only generates valid
+ * codepoints, and that it doesn't tend to test strings that are 
particularly long, though the more
+ * trials you run, the longer they get.
+ */
+@Category(UnitTest.class)
+@RunWith(JUnitQuickcheck.class)
+public class InternalDataSerializerQuickcheckStringTest {
+  @Property(trials = 1000)
+  public void StringSerializedDeserializesToSameValue(String 
originalString) throws IOException {
+ByteArrayOutputStream byteArrayOutputStream = new 
ByteArrayOutputStream();
+DataOutputStream dataOutputStream = new 
DataOutputStream(byteArrayOutputStream);
+
+DataSerializer.writeString(originalString, dataOutputStream);
+dataOutputStream.flush();
+
+byte[] stringBytes = byteArrayOutputStream.toByteArray();
+DataInputStream dataInputStream = new DataInputStream(new 
ByteArrayInputStream(stringBytes));
+String returnedString = DataSerializer.readString(dataInputStream);
+
+assertEquals("Deserialized string matches original", originalString, 
returnedString);
+  }
+
+  @Before
+  public void setUp() {
+// this may be unnecessary, but who knows what tests run before us.
+InternalDataSerializer.reinitialize();
--- End diff --

If the goal here is to protect against tests that ran before this, you may 
want to use `@BeforeClass` instead of `@Before` so that it only gets invoked once 
for the class, rather than once for each of the (1000) trials.


> Add junit-quickcheck to Gradle test dependencies.
> -
>
> Key: GEODE-2206
> URL: https://issues.apache.org/jira/browse/GEODE-2206
> Project: Geode
>  Issue Type: Improvement
>Reporter: Galen O'Sullivan
>Assignee: Galen O'Sullivan
>
> Unit tests allow us to test cases we know about and have thought of. 
> Property-based testing allows us to test those, and some cases we haven't 
> thought of -- you're essentially fuzzing a limited subset of the code. 
> {{junit-quickcheck}} makes it easy to write "property-based" tests with 
> generators for the builtin types. You can also constrain input or build 
> custom generators for constrained data.
> I think this would be especially helpful for testing areas like PDX 
> serialization, which should be able to accept any serializable object a user 
> creates.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2365) update clicache/src

2017-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850626#comment-15850626
 ] 

ASF GitHub Bot commented on GEODE-2365:
---

Github user pivotal-jbarrett commented on the issue:

https://github.com/apache/geode/pull/375
  
Merged, please close


> update clicache/src
> ---
>
> Key: GEODE-2365
> URL: https://issues.apache.org/jira/browse/GEODE-2365
> Project: Geode
>  Issue Type: Sub-task
>  Components: native client
>Reporter: Michael Martell
>
> Update all sources below src/clicache/src from gemfire to geode



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] geode issue #375: GEODE-2365: Replace gemfire with geode in clicache src.

2017-02-02 Thread pivotal-jbarrett
Github user pivotal-jbarrett commented on the issue:

https://github.com/apache/geode/pull/375
  
Merged, please close


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (GEODE-2365) update clicache/src

2017-02-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850625#comment-15850625
 ] 

ASF subversion and git services commented on GEODE-2365:


Commit fc9f1f6f5741d7f06077dda5208326f6e30abb94 in geode's branch 
refs/heads/next-gen-native-client-software-grant from [~mmartell]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=fc9f1f6 ]

GEODE-2365: Replace gemfire with geode in clicache.

This closes #375


> update clicache/src
> ---
>
> Key: GEODE-2365
> URL: https://issues.apache.org/jira/browse/GEODE-2365
> Project: Geode
>  Issue Type: Sub-task
>  Components: native client
>Reporter: Michael Martell
>
> Update all sources below src/clicache/src from gemfire to geode



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2422) Finish converting from GemFire to Geode in cppcache src

2017-02-02 Thread Michael Martell (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Martell updated GEODE-2422:
---
Description: 
There are still some classes in the cppcache src that were not converted to 
Geode from GemFire. For example, GemFireException.

Also, change corresponding clicache source that uses these.

  was:There are still some classes in the cppcache src that were not converted 
to Geode from GemFire.


> Finish converting from GemFire to Geode in cppcache src
> ---
>
> Key: GEODE-2422
> URL: https://issues.apache.org/jira/browse/GEODE-2422
> Project: Geode
>  Issue Type: Task
>  Components: native client
>Reporter: Michael Martell
>
> There are still some classes in the cppcache src that were not converted to 
> Geode from GemFire. For example, GemFireException.
> Also, change corresponding clicache source that uses these.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2317) native client cmake build should honor GEODE_HOME environment variable

2017-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850629#comment-15850629
 ] 

ASF GitHub Bot commented on GEODE-2317:
---

Github user pivotal-jbarrett closed the pull request at:

https://github.com/apache/geode/pull/379


> native client cmake build should honor GEODE_HOME environment variable
> --
>
> Key: GEODE-2317
> URL: https://issues.apache.org/jira/browse/GEODE-2317
> Project: Geode
>  Issue Type: Improvement
>  Components: native client
>Reporter: Dan Smith
>Assignee: Jacob S. Barrett
>
> The native client build currently looks for a GEODE_ROOT variable. However, 
> the convention in the java project and the geode-examples is to use a 
> GEODE_HOME environment variable to specify the location of geode. The native 
> client build should look for this environment variable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2317) native client cmake build should honor GEODE_HOME environment variable

2017-02-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850628#comment-15850628
 ] 

ASF subversion and git services commented on GEODE-2317:


Commit c2761c0ff1b6271fffbac1ebc683b6a6e96d3a35 in geode's branch 
refs/heads/next-gen-native-client-software-grant from Jacob Barrett
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=c2761c0 ]

GEODE-2317: FindGeode searches GEODE_HOME environment variable.


> native client cmake build should honor GEODE_HOME environment variable
> --
>
> Key: GEODE-2317
> URL: https://issues.apache.org/jira/browse/GEODE-2317
> Project: Geode
>  Issue Type: Improvement
>  Components: native client
>Reporter: Dan Smith
>Assignee: Jacob S. Barrett
>
> The native client build currently looks for a GEODE_ROOT variable. However, 
> the convention in the java project and the geode-examples is to use a 
> GEODE_HOME environment variable to specify the location of geode. The native 
> client build should look for this environment variable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] geode pull request #384: GEODE-2421: Adding packer portion of making a VS201...

2017-02-02 Thread echobravopapa
Github user echobravopapa closed the pull request at:

https://github.com/apache/geode/pull/384


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] geode pull request #385: [GEODE-2408] Refactor CacheableDate to use C++ std:...

2017-02-02 Thread pivotal-jbarrett
GitHub user pivotal-jbarrett opened a pull request:

https://github.com/apache/geode/pull/385

[GEODE-2408] Refactor CacheableDate to use C++ std::chrono



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pivotal-jbarrett/geode feature/GEODE-2408

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/geode/pull/385.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #385






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (GEODE-2421) Create VS2015 AMI

2017-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850631#comment-15850631
 ] 

ASF GitHub Bot commented on GEODE-2421:
---

Github user echobravopapa closed the pull request at:

https://github.com/apache/geode/pull/384


> Create VS2015 AMI
> -
>
> Key: GEODE-2421
> URL: https://issues.apache.org/jira/browse/GEODE-2421
> Project: Geode
>  Issue Type: Task
>  Components: native client
>Reporter: Ernest Burghardt
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2408) Refactor CacheableDate to use C++ std::chrono

2017-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850632#comment-15850632
 ] 

ASF GitHub Bot commented on GEODE-2408:
---

GitHub user pivotal-jbarrett opened a pull request:

https://github.com/apache/geode/pull/385

[GEODE-2408] Refactor CacheableDate to use C++ std::chrono



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pivotal-jbarrett/geode feature/GEODE-2408

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/geode/pull/385.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #385






> Refactor CacheableDate to use C++ std::chrono
> -
>
> Key: GEODE-2408
> URL: https://issues.apache.org/jira/browse/GEODE-2408
> Project: Geode
>  Issue Type: Task
>  Components: native client
>Reporter: Jacob S. Barrett
>Assignee: Jacob S. Barrett
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] geode pull request #379: GEODE-2317: FindGeode searches GEODE_HOME environme...

2017-02-02 Thread pivotal-jbarrett
Github user pivotal-jbarrett closed the pull request at:

https://github.com/apache/geode/pull/379


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Review Request 56244: GEODE-2410 : afterPrimary and afterSecondary event listeners pass through the same critical section code

2017-02-02 Thread nabarun nag

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56244/
---

Review request for geode, Barry Oglesby, Jason Huynh, Lynn Hughes-Godfrey, Dan 
Smith, and xiaojian zhou.


Repository: geode


Description
---

Design key points:
==
1. The afterPrimary and afterSecondary calls from the chunk bucket pass through 
the same critical section
2. If the bucket is still primary it will attempt to acquire a Dlock on the 
bucket and create the index repo.
3. The afterPrimary call tries for 5 seconds to acquire the Dlock and then checks 
whether it is still primary. If it is, it retries acquiring the lock; if it has turned 
into a secondary, it exits the critical section (see the sketch below). It is assumed 
that the afterSecondary call will do the cleanup.
4. The afterSecondary call simply does the cleanup - close the writer, 
release locks, undo contributions to the lucene stats.
5. Also, if index creation fails during afterPrimary, lucene 
query execution will try to recompute the index once again.
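
A rough sketch of the retry loop in points 2 and 3 (isStillPrimary() and computeIndexRepository() are hypothetical stand-ins for the bucket-primary check and the index-repository creation; this is not the actual patch):

    import org.apache.geode.distributed.DistributedLockService;

    abstract class AfterPrimarySketch {
      abstract boolean isStillPrimary();        // hypothetical: is this bucket still primary?
      abstract void computeIndexRepository();   // hypothetical: build the Lucene index repo

      void afterPrimary(DistributedLockService lockService, String lockName) {
        while (isStillPrimary()) {
          // try for up to 5 seconds to acquire the distributed lock (point 3)
          if (lockService.lock(lockName, 5000, -1)) {
            // still primary and holding the lock: create the index repo (point 2)
            computeIndexRepository();
            return;   // cleanup and lock release are the afterSecondary path's job (point 4)
          }
          // lock not acquired within 5 seconds: loop back and re-check primary status;
          // if the bucket has turned secondary, exit the critical section
        }
      }
    }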


Diffs
-

  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java
 9e055f0 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/IndexRepositoryFactory.java
 5be17e3 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
 da0c2c2 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneQueryImpl.java
 0e8bb37 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManager.java
 2f87218 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawIndexRepositoryFactory.java
 2f61913 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManager.java
 b503692 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectory.java
 e43b60b 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneFunction.java
 5271a2f 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java
 1c47e89 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java
 a9fb52b 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectoryJUnitTest.java
 4204204 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/test/IndexRepositorySpy.java
 d3b1f2f 

Diff: https://reviews.apache.org/r/56244/diff/


Testing
---

* Lucene precheck successful
* Hydra failures are imminent - work ongoing on EOF + rebalance exceptions


Thanks,

nabarun nag



Re: Property-Based Testing for Geode

2017-02-02 Thread Dan Smith
+1 Cool!

-Dan

On Thu, Feb 2, 2017 at 1:21 PM, Galen M O'Sullivan 
wrote:

> Hi all,
>
> I would like to propose adding [junit-quickcheck](1) to Geode. It's named
> after the [Haskell tool](2) and functions more or less as automated testing
> for JUnit Theories (if anyone is familiar with those).
>
> Property-based testing means basically that you write a function that tests
> some code and checks some conditions you expect to hold true for all
> inputs, and then have a computer program test all sorts of weird inputs to
> try prove you wrong.
>
> Because the test data is randomly generated, you get to test against more
> inputs than you might even think of, and because the seed is saved, the
> test is reproducible. If junit-quickcheck finds one failure, it will even
> try to narrow down to a smallest example of a test failure.
>
> There are some limitations to this library -- for example, it doesn't tend
> to generate strings more than a few hundred characters in length (though
> this increases with sample size, so if you kick the sample size up, you can
> get into the thousands fairly quickly).
>
> [ScalaCheck](3) is another option that does much the same and seems to have
> more functionality (and in particular, it seems to be able to handle state
> in tests), but it requires Scala to run and tests are also written in Scala
> (though it can test Java code). I don't think there will be much support
> for including Scala as a dependency for Geode.
>
> I've put up a Review board request and PR:
> https://reviews.apache.org/r/56242/
> https://github.com/apache/geode/pull/383
>
>
> I'd like to hear the community's input.
>
> Thanks,
> Galen O'Sullivan
>
> [1]: http://pholser.github.io/junit-quickcheck/site/0.7/
> [2]: http://www.cse.chalmers.se/~rjmh/QuickCheck/manual.html
> [3]: http://www.scalacheck.org/index.html
>


Re: Review Request 56244: GEODE-2410 : afterPrimary and afterSecondary event listeners pass through the same critical section code

2017-02-02 Thread Jason Huynh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56244/#review164038
---




geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
 (line 56)


Should we create a ticket to change this to a specific list of exceptions 
we may be expecting?



geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneQueryImpl.java
 (line 47)


Remove



geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawIndexRepositoryFactory.java
 (line 54)


Remove


- Jason Huynh


On Feb. 2, 2017, 10:32 p.m., nabarun nag wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56244/
> ---
> 
> (Updated Feb. 2, 2017, 10:32 p.m.)
> 
> 
> Review request for geode, Barry Oglesby, Jason Huynh, Lynn Hughes-Godfrey, 
> Dan Smith, and xiaojian zhou.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Design key points:
> ==
> 1. The afterPrimary and afterSecondary calls from the chunk bucket pass 
> through the same critical section
> 2. If the bucket is still primary it will attempt to acquire a Dlock on the 
> bucket and create the index repo.
> 3. The primary call tries for 5 seconds to acquire the Dlock and then checks 
> if it is still primary. If it is, it will retry to acquire the lock. If it 
> turned into a secondary it will exit the critical section. It is assumed that 
> the after secondary call will do the cleanup.
> 4. The afterSecondary call will simply do the clean up - close the writer, 
> release locks, undo contributions to the lucene stats.
> 5. Also, in a situation when index creation fails during after primary, 
> lucene query execution will try to recompute the index once again.
> 
> 
> Diffs
> -
> 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java
>  9e055f0 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/IndexRepositoryFactory.java
>  5be17e3 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
>  da0c2c2 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneQueryImpl.java
>  0e8bb37 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManager.java
>  2f87218 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawIndexRepositoryFactory.java
>  2f61913 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManager.java
>  b503692 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectory.java
>  e43b60b 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneFunction.java
>  5271a2f 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java
>  1c47e89 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java
>  a9fb52b 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectoryJUnitTest.java
>  4204204 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/test/IndexRepositorySpy.java
>  d3b1f2f 
> 
> Diff: https://reviews.apache.org/r/56244/diff/
> 
> 
> Testing
> ---
> 
> * Lucene precheck successfull
> * Hydra failures are imminent - work ongoing on EOF + rebalance exceptions
> 
> 
> Thanks,
> 
> nabarun nag
> 
>



[GitHub] geode issue #353: GEODE-2269 update to allow region entries non null empty k...

2017-02-02 Thread upthewaterspout
Github user upthewaterspout commented on the issue:

https://github.com/apache/geode/pull/353
  
+1 I'll merge the non-.gitignore changes. I don't think we should be 
ignoring directories named bin; we actually have some code checked in to those 
directories.

I suspect you may be using Eclipse? The default Eclipse behavior is to 
create an output directory called bin. If so, you should use ./gradlew eclipse 
and then just import the generated Eclipse project into your workspace. Not 
only will that fix the output directory, but it will set up your classpath for 
you!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (GEODE-2269) It seems the gfsh "remove" command cannot remove r...

2017-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850679#comment-15850679
 ] 

ASF GitHub Bot commented on GEODE-2269:
---

Github user upthewaterspout commented on the issue:

https://github.com/apache/geode/pull/353
  
+1 I'll merge the non-.gitignore changes. I don't think we should be 
ignoring directories named bin; we actually have some code checked in to those 
directories.

I suspect you may be using Eclipse? The default Eclipse behavior is to 
create an output directory called bin. If so, you should use ./gradlew eclipse 
and then just import the generated Eclipse project into your workspace. Not 
only will that fix the output directory, but it will set up your classpath for 
you!


> It seems the gfsh "remove" command cannot remove r...
> -
>
> Key: GEODE-2269
> URL: https://issues.apache.org/jira/browse/GEODE-2269
> Project: Geode
>  Issue Type: Improvement
>  Components: docs
>Reporter: Gregory Green
>
> It seems the gfsh "remove" command cannot remove region entries with a 0 
> length string key.
> gfsh>query --query="select toString().length() from /Recipient.keySet()"
> Result : true
> startCount : 0
> endCount   : 20
> Rows   : 3
> Result
> --
> 0
> 2
> 5
> gfsh>remove --region=/Recipient --key=""
> Message : Key is either empty or Null
> Result  : false
> gfsh>remove --region=/Recipient --key="''"
> Message : Key is either empty or Null
> Result  : false



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2420) Warn a user if they try to export too much data

2017-02-02 Thread Karen Smoler Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Smoler Miller updated GEODE-2420:
---
Component/s: docs

> Warn a user if they try to export too much data
> ---
>
> Key: GEODE-2420
> URL: https://issues.apache.org/jira/browse/GEODE-2420
> Project: Geode
>  Issue Type: Sub-task
>  Components: configuration, docs, gfsh
>Reporter: Jared Stewart
>
> We should warn a user and prompt for confirmation before trying to perform an 
> `export logs` operation that would result in a file over some threshold.  
> (Logs and stats have the potential to be very large.)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Property-Based Testing for Geode

2017-02-02 Thread Jared Stewart
+1 Quickcheck looks like a great way to test for weird inputs! I put a couple 
of minor comments in your PR.


> On Feb 2, 2017, at 2:40 PM, Dan Smith  wrote:
> 
> +1 Cool!
> 
> -Dan
> 
> On Thu, Feb 2, 2017 at 1:21 PM, Galen M O'Sullivan 
> wrote:
> 
>> Hi all,
>> 
>> I would like to propose adding [junit-quickcheck](1) to Geode. It's named
>> after the [Haskell tool](2) and functions more or less as automated testing
>> for JUnit Theories (if anyone is familiar with those).
>> 
>> Property-based testing means basically that you write a function that tests
>> some code and checks some conditions you expect to hold true for all
>> inputs, and then have a computer program test all sorts of weird inputs to
>> try prove you wrong.
>> 
>> Because the test data is randomly generated, you get to test against more
>> inputs than you might even think of, and because the seed is saved, the
>> test is reproducible. If junit-quickcheck finds one failure, it will even
>> try to narrow down to a smallest example of a test failure.
>> 
>> There are some limitations to this library -- for example, it doesn't tend
>> to generate strings more than a few hundred characters in length (though
>> this increases with sample size, so if you kick the sample size up, you can
>> get into the thousands fairly quickly).
>> 
>> [ScalaCheck](3) is another option that does much the same and seems to have
>> more functionality (and in particular, it seems to be able to handle state
>> in tests), but it requires Scala to run and tests are also written in Scala
>> (though it can test Java code). I don't think there will be much support
>> for including Scala as a dependency for Geode.
>> 
>> I've put up a Review board request and PR:
>> https://reviews.apache.org/r/56242/
>> https://github.com/apache/geode/pull/383
>> 
>> 
>> I'd like to hear the community's input.
>> 
>> Thanks,
>> Galen O'Sullivan
>> 
>> [1]: http://pholser.github.io/junit-quickcheck/site/0.7/
>> [2]: http://www.cse.chalmers.se/~rjmh/QuickCheck/manual.html
>> [3]: http://www.scalacheck.org/index.html
>> 



[GitHub] geode pull request #383: GEODE-2206: Add junit-quickcheck to geode-core.

2017-02-02 Thread jaredjstewart
Github user jaredjstewart commented on a diff in the pull request:

https://github.com/apache/geode/pull/383#discussion_r99239963
  
--- Diff: 
geode-core/src/test/java/org/apache/geode/internal/InternalDataSerializerQuickcheckStringTest.java
 ---
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for 
additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF 
ANY KIND, either express
+ * or implied. See the License for the specific language governing 
permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal;
+
+import static org.junit.Assert.*;
+
+import com.pholser.junit.quickcheck.Property;
+import com.pholser.junit.quickcheck.runner.JUnitQuickcheck;
+import org.apache.geode.DataSerializer;
+import org.apache.geode.test.junit.categories.UnitTest;
+import org.junit.Before;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+
+import java.io.ByteArrayOutputStream;
+import java.io.ByteArrayInputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+/**
+ * Tests the serialization and deserialization of randomly generated 
Strings.
+ *
+ * The current implementation (0.7 or 0.8alpha2) of 
junit-quickcheck-generators only generates valid
+ * codepoints, and that it doesn't tend to test strings that are 
particularly long, though the more
+ * trials you run, the longer they get.
+ */
+@Category(UnitTest.class)
+@RunWith(JUnitQuickcheck.class)
+public class InternalDataSerializerQuickcheckStringTest {
+  @Property(trials = 1000)
+  public void StringSerializedDeserializesToSameValue(String 
originalString) throws IOException {
+ByteArrayOutputStream byteArrayOutputStream = new 
ByteArrayOutputStream();
+DataOutputStream dataOutputStream = new 
DataOutputStream(byteArrayOutputStream);
+
+DataSerializer.writeString(originalString, dataOutputStream);
+dataOutputStream.flush();
+
+byte[] stringBytes = byteArrayOutputStream.toByteArray();
+DataInputStream dataInputStream = new DataInputStream(new 
ByteArrayInputStream(stringBytes));
+String returnedString = DataSerializer.readString(dataInputStream);
+
+assertEquals("Deserialized string matches original", originalString, 
returnedString);
--- End diff --

I think it might be helpful in the failure case to also log the bytes of 
the originalString since it might contain garbage/non-printable characters.
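
For example, the assertion could fold the raw bytes into the failure message along these lines (a hypothetical tweak, not part of the PR):

    // include the UTF-8 bytes so non-printable input is still diagnosable when the assertion fails
    assertEquals(
        "Deserialized string matches original; original bytes: "
            + java.util.Arrays.toString(
                originalString.getBytes(java.nio.charset.StandardCharsets.UTF_8)),
        originalString, returnedString);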


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (GEODE-2206) Add junit-quickcheck to Gradle test dependencies.

2017-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850682#comment-15850682
 ] 

ASF GitHub Bot commented on GEODE-2206:
---

Github user jaredjstewart commented on a diff in the pull request:

https://github.com/apache/geode/pull/383#discussion_r99239963
  
--- Diff: 
geode-core/src/test/java/org/apache/geode/internal/InternalDataSerializerQuickcheckStringTest.java
 ---
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for 
additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF 
ANY KIND, either express
+ * or implied. See the License for the specific language governing 
permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal;
+
+import static org.junit.Assert.*;
+
+import com.pholser.junit.quickcheck.Property;
+import com.pholser.junit.quickcheck.runner.JUnitQuickcheck;
+import org.apache.geode.DataSerializer;
+import org.apache.geode.test.junit.categories.UnitTest;
+import org.junit.Before;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+
+import java.io.ByteArrayOutputStream;
+import java.io.ByteArrayInputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+/**
+ * Tests the serialization and deserialization of randomly generated 
Strings.
+ *
+ * The current implementation (0.7 or 0.8alpha2) of 
junit-quickcheck-generators only generates valid
+ * codepoints, and that it doesn't tend to test strings that are 
particularly long, though the more
+ * trials you run, the longer they get.
+ */
+@Category(UnitTest.class)
+@RunWith(JUnitQuickcheck.class)
+public class InternalDataSerializerQuickcheckStringTest {
+  @Property(trials = 1000)
+  public void StringSerializedDeserializesToSameValue(String 
originalString) throws IOException {
+ByteArrayOutputStream byteArrayOutputStream = new 
ByteArrayOutputStream();
+DataOutputStream dataOutputStream = new 
DataOutputStream(byteArrayOutputStream);
+
+DataSerializer.writeString(originalString, dataOutputStream);
+dataOutputStream.flush();
+
+byte[] stringBytes = byteArrayOutputStream.toByteArray();
+DataInputStream dataInputStream = new DataInputStream(new 
ByteArrayInputStream(stringBytes));
+String returnedString = DataSerializer.readString(dataInputStream);
+
+assertEquals("Deserialized string matches original", originalString, 
returnedString);
--- End diff --

I think it might be helpful in the failure case to also log the bytes of 
the originalString since it might contain garbage/non-printable characters.


> Add junit-quickcheck to Gradle test dependencies.
> -
>
> Key: GEODE-2206
> URL: https://issues.apache.org/jira/browse/GEODE-2206
> Project: Geode
>  Issue Type: Improvement
>Reporter: Galen O'Sullivan
>Assignee: Galen O'Sullivan
>
> Unit tests allow us to test cases we know about and have thought of. 
> Property-based testing allows us to test those, and some cases we haven't 
> thought of -- you're essentially fuzzing a limited subset of the code. 
> {{junit-quickcheck}} makes it easy to write "property-based" tests with 
> generators for the builtin types. You can also constrain input or build 
> custom generators for constrained data.
> I think this would be especially helpful for testing areas like PDX 
> serialization, which should be able to accept any serializable object a user 
> creates.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2423) Remove unused keystore files

2017-02-02 Thread Michael Martell (JIRA)
Michael Martell created GEODE-2423:
--

 Summary: Remove unused keystore files
 Key: GEODE-2423
 URL: https://issues.apache.org/jira/browse/GEODE-2423
 Project: Geode
  Issue Type: Task
  Components: native client
Reporter: Michael Martell


This task is to remove unreferenced files in cppcache/integration-test/keystore 
folder.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Review Request 56242: GEODE-2206: Add junit-quickcheck to geode-core; add a test that uses it.

2017-02-02 Thread Kirk Lund

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56242/#review164041
---


Ship it!




Ship It!

- Kirk Lund


On Feb. 2, 2017, 9:35 p.m., Galen O'Sullivan wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56242/
> ---
> 
> (Updated Feb. 2, 2017, 9:35 p.m.)
> 
> 
> Review request for geode, Bruce Schuchardt, Hitesh Khamesra, Kirk Lund, and 
> Udo Kohlmeyer.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> This adds a test dependency on `junit-quickcheck` (and 
> `junit-quickcheck-generators` and `junit-quickcheck-guava`) to geode-core. 
> I've included an example test of one of the cases in which property-based 
> testing is particularly nice: when you have two operations that should 
> reverse each other and want to test them with as much garbage as possible.
> 
> Property-based testing means basically that you write a function that tests 
> some code and checks some conditions you expect to hold true for all inputs, 
> and then have a computer program test all sorts of weird inputs to try prove 
> you wrong.
> 
> Because the test data is randomly generated, you get to test against more 
> inputs than you might even think of, and because the seed is saved, the test 
> is reproducible. If `junit-quickcheck` finds one failure, it will even try to 
> narrow down to a smallest example of a test failure.
> 
> I'm about to send an email to the dev list soliciting feedback.
> 
> 
> Diffs
> -
> 
>   geode-core/build.gradle 3c2a2abf5 
>   
> geode-core/src/test/java/org/apache/geode/internal/InternalDataSerializerQuickcheckStringTest.java
>  PRE-CREATION 
>   
> geode-core/src/test/java/org/apache/geode/internal/InternalDataSerializerRandomizedJUnitTest.java
>  f361de4a2 
>   gradle/dependency-versions.properties fbc76e012 
> 
> Diff: https://reviews.apache.org/r/56242/diff/
> 
> 
> Testing
> ---
> 
> The test passes on my machine. This is mostly just adding a dependency, so 
> there's not a lot here to test.
> 
> I've read some of the source of junit-quickcheck and looked into the data it 
> generates: integral numbers seem pretty reasonable. Strings tend to be 
> short-ish (length up to hundreds with hundreds of iterations, thousands with 
> thousands), but are made up of random codepoints, which is nice.
> 
> 
> Thanks,
> 
> Galen O'Sullivan
> 
>



Re: Review Request 56244: GEODE-2410 : afterPrimary and afterSecondary event listeners pass through the same critical section code

2017-02-02 Thread nabarun nag


> On Feb. 2, 2017, 10:45 p.m., Jason Huynh wrote:
> > geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java,
> >  line 58
> > 
> >
> > Should we create a ticket to change this to a specific list of 
> > exceptions we may be expecting?

Should this be similar to the behaviour of afterPrimary?

try {
  lucenePartitionRepositoryManager.computeRepository(bucketId);
} catch (BucketNotFoundException e) {
  logger.warn(
      "Index repository could not be created when index chunk region bucket became primary. "
          + "Deferring index repository to be created lazily during lucene query execution."
          + e);
}


- nabarun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56244/#review164038
---


On Feb. 2, 2017, 10:32 p.m., nabarun nag wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56244/
> ---
> 
> (Updated Feb. 2, 2017, 10:32 p.m.)
> 
> 
> Review request for geode, Barry Oglesby, Jason Huynh, Lynn Hughes-Godfrey, 
> Dan Smith, and xiaojian zhou.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Design key points:
> ==
> 1. The afterPrimary and afterSecondary calls from the chunk bucket pass 
> through the same critical section
> 2. If the bucket is still primary it will attempt to acquire a Dlock on the 
> bucket and create the index repo.
> 3. The primary call tries for 5 seconds to acquire the Dlock and then checks 
> if it is still primary. If it is, it will retry to acquire the lock. If it 
> turned into a secondary it will exit the critical section. It is assumed that 
> the after secondary call will do the cleanup.
> 4. The afterSecondary call will simply do the clean up - close the writer, 
> release locks, undo contributions to the lucene stats.
> 5. Also, in a situation when index creation fails during after primary, 
> lucene query execution will try to recompute the index once again.
> 
> 
> Diffs
> -
> 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java
>  9e055f0 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/IndexRepositoryFactory.java
>  5be17e3 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
>  da0c2c2 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneQueryImpl.java
>  0e8bb37 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManager.java
>  2f87218 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawIndexRepositoryFactory.java
>  2f61913 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManager.java
>  b503692 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectory.java
>  e43b60b 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneFunction.java
>  5271a2f 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java
>  1c47e89 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java
>  a9fb52b 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectoryJUnitTest.java
>  4204204 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/test/IndexRepositorySpy.java
>  d3b1f2f 
> 
> Diff: https://reviews.apache.org/r/56244/diff/
> 
> 
> Testing
> ---
> 
> * Lucene precheck successfull
> * Hydra failures are imminent - work ongoing on EOF + rebalance exceptions
> 
> 
> Thanks,
> 
> nabarun nag
> 
>



Re: Review Request 56244: GEODE-2410 : afterPrimary and afterSecondary event listeners pass through the same critical section code

2017-02-02 Thread nabarun nag

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56244/
---

(Updated Feb. 2, 2017, 10:55 p.m.)


Review request for geode, Barry Oglesby, Jason Huynh, Lynn Hughes-Godfrey, Dan 
Smith, and xiaojian zhou.


Changes
---

Applied Jason's review changes


Repository: geode


Description
---

Design key points:
==
1. The afterPrimary and afterSecondary calls from the chunk bucket pass through 
the same critical section
2. If the bucket is still primary it will attempt to acquire a Dlock on the 
bucket and create the index repo.
3. The primary call tries for 5 seconds to acquire the Dlock and then checks if 
it is still primary. If it is, it will retry to acquire the lock. If it turned 
into a secondary it will exit the critical section. It is assumed that the 
after secondary call will do the cleanup.
4. The afterSecondary call will simply do the clean up - close the writer, 
release locks, undo contributions to the lucene stats.
5. Also, in a situation when index creation fails during after primary, lucene 
query execution will try to recompute the index once again.


Diffs (updated)
-

  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java
 9e055f0 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/IndexRepositoryFactory.java
 5be17e3 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
 da0c2c2 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneQueryImpl.java
 0e8bb37 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManager.java
 2f87218 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawIndexRepositoryFactory.java
 2f61913 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManager.java
 b503692 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectory.java
 e43b60b 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneFunction.java
 5271a2f 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java
 1c47e89 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java
 a9fb52b 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectoryJUnitTest.java
 4204204 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/test/IndexRepositorySpy.java
 d3b1f2f 

Diff: https://reviews.apache.org/r/56244/diff/


Testing
---

* Lucene precheck successful
* Hydra failures are imminent - work ongoing on EOF + rebalance exceptions


Thanks,

nabarun nag



Re: Review Request 56244: GEODE-2410 : afterPrimary and afterSecondary event listeners pass through the same critical section code

2017-02-02 Thread Jason Huynh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56244/#review164043
---




geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
 (line 56)


Probably not. More things can go wrong when closing (such as cache-closed 
exceptions, or exceptions from trying to write when no longer primary).
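
In other words, the close path probably wants a broad catch so the cleanup still completes; a hypothetical sketch only, where writer, logger, lockService and lockName are illustrative names:

    // Hypothetical: closing on afterSecondary can throw more than IOException
    // (e.g. a cache-closed exception, or an error from writing when no longer
    // primary), so catch broadly and still release the lock.
    try {
      writer.close();
    } catch (Exception e) {
      logger.debug("Ignoring exception while closing the index writer", e);
    } finally {
      lockService.unlock(lockName);
    }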


- Jason Huynh


On Feb. 2, 2017, 10:32 p.m., nabarun nag wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56244/
> ---
> 
> (Updated Feb. 2, 2017, 10:32 p.m.)
> 
> 
> Review request for geode, Barry Oglesby, Jason Huynh, Lynn Hughes-Godfrey, 
> Dan Smith, and xiaojian zhou.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Design key points:
> ==
> 1. The afterPrimary and afterSecondary calls from the chunk bucket pass 
> through the same critical section
> 2. If the bucket is still primary it will attempt to acquire a Dlock on the 
> bucket and create the index repo.
> 3. The primary call tries for 5 seconds to acquire the Dlock and then checks 
> if it is still primary. If it is, it will retry to acquire the lock. If it 
> turned into a secondary it will exit the critical section. It is assumed that 
> the after secondary call will do the cleanup.
> 4. The afterSecondary call will simply do the clean up - close the writer, 
> release locks, undo contributions to the lucene stats.
> 5. Also, in a situation when index creation fails during after primary, 
> lucene query execution will try to recompute the index once again.
> 
> 
> Diffs
> -
> 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java
>  9e055f0 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/IndexRepositoryFactory.java
>  5be17e3 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
>  da0c2c2 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneQueryImpl.java
>  0e8bb37 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManager.java
>  2f87218 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawIndexRepositoryFactory.java
>  2f61913 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManager.java
>  b503692 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectory.java
>  e43b60b 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneFunction.java
>  5271a2f 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java
>  1c47e89 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java
>  a9fb52b 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectoryJUnitTest.java
>  4204204 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/test/IndexRepositorySpy.java
>  d3b1f2f 
> 
> Diff: https://reviews.apache.org/r/56244/diff/
> 
> 
> Testing
> ---
> 
> * Lucene precheck successful
> * Hydra failures are imminent - work ongoing on EOF + rebalance exceptions
> 
> 
> Thanks,
> 
> nabarun nag
> 
>



Re: Review Request 56244: GEODE-2410 : afterPrimary and afterSecondary event listeners pass through the same critical section code

2017-02-02 Thread Dan Smith

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56244/#review164044
---


Fix it, then Ship it!





geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/IndexRepositoryFactory.java
 (line 69)


I think this timeout should maybe be shorter - 100 ms or so. Otherwise a 
query could block for 5 seconds in rare cases.
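
Concretely, the suggestion amounts to shortening the per-attempt wait passed to the distributed lock service while keeping the retry loop itself; illustrative only:

    // Hypothetical: a 100 ms per-attempt wait instead of 5000 ms, so a query
    // waiting on the same lock is not blocked for seconds in rare cases.
    locked = lockService.lock(lockName, 100 /* waitTimeMillis */, -1);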


- Dan Smith


On Feb. 2, 2017, 10:55 p.m., nabarun nag wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56244/
> ---
> 
> (Updated Feb. 2, 2017, 10:55 p.m.)
> 
> 
> Review request for geode, Barry Oglesby, Jason Huynh, Lynn Hughes-Godfrey, 
> Dan Smith, and xiaojian zhou.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Design key points:
> ==
> 1. The afterPrimary and afterSecondary calls from the chunk bucket pass 
> through the same critical section
> 2. If the bucket is still primary it will attempt to acquire a Dlock on the 
> bucket and create the index repo.
> 3. The primary call tries for 5 seconds to acquire the Dlock and then checks 
> if it is still primary. If it is, it will retry to acquire the lock. If it 
> turned into a secondary it will exit the critical section. It is assumed that 
> the after secondary call will do the cleanup.
> 4. The afterSecondary call will simply do the clean up - close the writer, 
> release locks, undo contributions to the lucene stats.
> 5. Also, in a situation when index creation fails during after primary, 
> lucene query execution will try to recompute the index once again.
> 
> 
> Diffs
> -
> 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java
>  9e055f0 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/IndexRepositoryFactory.java
>  5be17e3 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
>  da0c2c2 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneQueryImpl.java
>  0e8bb37 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManager.java
>  2f87218 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawIndexRepositoryFactory.java
>  2f61913 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManager.java
>  b503692 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectory.java
>  e43b60b 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneFunction.java
>  5271a2f 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java
>  1c47e89 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java
>  a9fb52b 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectoryJUnitTest.java
>  4204204 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/test/IndexRepositorySpy.java
>  d3b1f2f 
> 
> Diff: https://reviews.apache.org/r/56244/diff/
> 
> 
> Testing
> ---
> 
> * Lucene precheck successful
> * Hydra failures are imminent - work ongoing on EOF + rebalance exceptions
> 
> 
> Thanks,
> 
> nabarun nag
> 
>



[Spring CI] Spring Data GemFire > Nightly-ApacheGeode > #459 has FAILED (12 tests failed, 1 failures were new)

2017-02-02 Thread Spring CI

---
Spring Data GemFire > Nightly-ApacheGeode > #459 failed.
---
Scheduled
12/1666 tests failed, 1 failure was new.

https://build.spring.io/browse/SGF-NAG-459/

-
Currently Responsible
-

John Blum 



--
Failing Jobs
--
  - Default Job (Default Stage): 12 of 1666 tests failed.




--
Tests
--
New Test Failures (1)
   - ApacheShiroRealmGeodeSecurityIntegrationTests: Authorized user
Existing Test Failures (11)
   - ClientSubRegionTest: Org.springframework.data.gemfire.client. client sub 
region test
   - GemFireDataSourceTest: Org.springframework.data.gemfire.client. gem fire 
data source test
   - GemFireDataSourceWithLocalRegionTest: 
Org.springframework.data.gemfire.client. gem fire data source with local region 
test
   - ContinuousQueryListenerContainerNamespaceTest: 
Org.springframework.data.gemfire.config.xml. continuous query listener 
container namespace test
   - ClientCacheFunctionExecutionWithPdxIntegrationTest: 
Org.springframework.data.gemfire.function. client cache function execution with 
pdx integration test
   - FunctionExecutionTests: 
Org.springframework.data.gemfire.function.execution. function execution tests
   - FunctionIntegrationTests: 
Org.springframework.data.gemfire.function.execution. function integration tests
   - GemfireFunctionTemplateTests: 
Org.springframework.data.gemfire.function.execution. gemfire function template 
tests
   - ListenerContainerTests: Org.springframework.data.gemfire.listener. 
listener container tests
   - ContainerXmlSetupTest: Org.springframework.data.gemfire.listener.adapter. 
container xml setup test
   - RepositoryClientRegionTests: 
Org.springframework.data.gemfire.repository.config. repository client region 
tests

--
This message is automatically generated by Atlassian Bamboo

[jira] [Commented] (GEODE-2372) OpExecutorImpl handleException method should print out the stacktrace if debugging was enabled

2017-02-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850694#comment-15850694
 ] 

ASF subversion and git services commented on GEODE-2372:


Commit 868fcc8359d825565976b5774cf012f675ded51a in geode's branch 
refs/heads/develop from [~nnag]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=868fcc8 ]

GEODE-2372: handleException prints the stacktrace when debug enabled


> OpExecutorImpl handleException method should print out the stacktrace if 
> debugging was enabled 
> ---
>
> Key: GEODE-2372
> URL: https://issues.apache.org/jira/browse/GEODE-2372
> Project: Geode
>  Issue Type: Bug
>  Components: client/server
>Reporter: nabarun
>Assignee: nabarun
>
> Printing out the stacktrace will help in debugging failures.
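
The change boils down to gating the stack trace on the debug level; a minimal sketch with an assumed log4j2-style logger field, not the actual OpExecutorImpl code:

    // Hypothetical sketch: include the Throwable only when debug logging is
    // on, so stack traces appear in debug runs without cluttering normal logs.
    if (logger.isDebugEnabled()) {
      logger.debug("handleException caught", e);          // full stack trace
    } else {
      logger.warn("handleException caught: {}", e.getMessage());
    }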



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Review Request 56244: GEODE-2410 : afterPrimary and afterSecondary event listeners pass through the same critical section code

2017-02-02 Thread nabarun nag

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56244/
---

(Updated Feb. 2, 2017, 11:04 p.m.)


Review request for geode, Barry Oglesby, Jason Huynh, Lynn Hughes-Godfrey, Dan 
Smith, and xiaojian zhou.


Changes
---

Applied Dan's review changes


Repository: geode


Description
---

Design key points:
==
1. The afterPrimary and afterSecondary calls from the chunk bucket pass through 
the same critical section
2. If the bucket is still primary it will attempt to acquire a Dlock on the 
bucket and create the index repo.
3. The primary call tries for 5 seconds to acquire the Dlock and then checks if 
it is still primary. If it is, it will retry to acquire the lock. If it turned 
into a secondary it will exit the critical section. It is assumed that the 
after secondary call will do the cleanup.
4. The afterSecondary call will simply do the clean up - close the writer, 
release locks, undo contributions to the lucene stats.
5. Also, in a situation when index creation fails during after primary, lucene 
query execution will try to recompute the index once again.


Diffs (updated)
-

  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java
 9e055f0 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/IndexRepositoryFactory.java
 5be17e3 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
 da0c2c2 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneQueryImpl.java
 0e8bb37 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManager.java
 2f87218 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawIndexRepositoryFactory.java
 2f61913 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManager.java
 b503692 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectory.java
 e43b60b 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneFunction.java
 5271a2f 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java
 1c47e89 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java
 a9fb52b 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectoryJUnitTest.java
 4204204 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/test/IndexRepositorySpy.java
 d3b1f2f 

Diff: https://reviews.apache.org/r/56244/diff/


Testing
---

* Lucene precheck successful
* Hydra failures are imminent - work ongoing on EOF + rebalance exceptions


Thanks,

nabarun nag



Re: Review Request 56244: GEODE-2410 : afterPrimary and afterSecondary event listeners pass through the same critical section code

2017-02-02 Thread nabarun nag

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56244/
---

(Updated Feb. 2, 2017, 11:07 p.m.)


Review request for geode, Barry Oglesby, Jason Huynh, Lynn Hughes-Godfrey, Dan 
Smith, and xiaojian zhou.


Repository: geode


Description
---

Design key points:
==
1. The afterPrimary and afterSecondary calls from the chunk bucket pass through 
the same critical section
2. If the bucket is still primary it will attempt to acquire a Dlock on the 
bucket and create the index repo.
3. The primary call tries for 5 seconds to acquire the Dlock and then checks if 
it is still primary. If it is, it will retry to acquire the lock. If it turned 
into a secondary it will exit the critical section. It is assumed that the 
after secondary call will do the cleanup.
4. The afterSecondary call will simply do the clean up - close the writer, 
release locks, undo contributions to the lucene stats.
5. Also, in a situation when index creation fails during after primary, lucene 
query execution will try to recompute the index once again.


Diffs
-

  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java
 9e055f0 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/IndexRepositoryFactory.java
 5be17e3 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
 da0c2c2 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneQueryImpl.java
 0e8bb37 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManager.java
 2f87218 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawIndexRepositoryFactory.java
 2f61913 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManager.java
 b503692 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectory.java
 e43b60b 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneFunction.java
 5271a2f 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java
 1c47e89 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java
 a9fb52b 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectoryJUnitTest.java
 4204204 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/test/IndexRepositorySpy.java
 d3b1f2f 

Diff: https://reviews.apache.org/r/56244/diff/


Testing (updated)
---

* Lucene precheck successful
* work ongoing on EOF + rebalance exceptions


Thanks,

nabarun nag



Re: geode git commit: GEODE-2421: Adding packer portion of making a VS2015 dev AMI

2017-02-02 Thread Mark Bretl
Hi,

How does/will this help the community?

--Mark

On Thu, Feb 2, 2017 at 2:25 PM,  wrote:

> Repository: geode
> Updated Branches:
>   refs/heads/next-gen-native-client-software-grant e79c4072b -> 340f2fca8
>
>
> GEODE-2421: Adding packer portion of making a VS2015 dev AMI
>
> This closes #384
>
>
> Project: http://git-wip-us.apache.org/repos/asf/geode/repo
> Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/340f2fca
> Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/340f2fca
> Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/340f2fca
>
> Branch: refs/heads/next-gen-native-client-software-grant
> Commit: 340f2fca80d9388155ed0911712f9a830211b32b
> Parents: e79c407
> Author: Ernest Burghardt 
> Authored: Thu Feb 2 14:03:10 2017 -0800
> Committer: Dan Smith 
> Committed: Thu Feb 2 14:24:20 2017 -0800
>
> --
>  packer/windows-2012-vs-2015.json | 64 +++
>  packer/windows/install-vs-2015-community.ps1 |  9 
>  2 files changed, 73 insertions(+)
> --
>
>
> http://git-wip-us.apache.org/repos/asf/geode/blob/340f2fca/packer/windows-2012-vs-2015.json
> --
> diff --git a/packer/windows-2012-vs-2015.json b/packer/windows-2012-vs-2015.json
> new file mode 100644
> index 000..da82b94
> --- /dev/null
> +++ b/packer/windows-2012-vs-2015.json
> @@ -0,0 +1,64 @@
> +{
> +  "variables":{
> +"region":"us-west-2",
> +"source_ami":"ami-ac5395cc",
> +"source_image_name":"X.vmx",
> +"image_name":"windows-2012-vs-2015"
> +  },
> +  "builders":[
> +{
> +  "type":"amazon-ebs",
> +  "instance_type":"t2.large",
> +  "ami_name":"native-{{user `version`}}-{{user `image_name`}}
> {{timestamp}}",
> +  "access_key":"{{user `aws_access_key`}}",
> +  "secret_key":"{{user `aws_secret_key`}}",
> +  "region":"{{user `region`}}",
> +  "source_ami":"{{user `source_ami`}}",
> +  "subnet_id":"{{user `subnet_id`}}",
> +  "vpc_id":"{{user `vpc_id`}}",
> +  "tags":{
> +"team":"native",
> +"version":"{{user `version`}}",
> +"source_ami":"{{user `source_ami`}}"
> +  },
> +  "communicator":"winrm",
> +  "winrm_username":"Administrator",
> +  "launch_block_device_mappings":[
> +{
> +  "device_name":"/dev/sda1",
> +  "delete_on_termination":true,
> +  "volume_size":60
> +}
> +  ]
> +}
> +  ],
> +  "provisioners":[
> +{
> +  "pause_before":"30s",
> +  "type":"file",
> +  "source":"windows/Packer.psm1",
> +  "destination":"Documents/WindowsPowerShell/Modules/Packer/
> Packer.psm1"
> +},
> +{
> +  "type":"powershell",
> +  "scripts":[
> +"windows/install-vs-2015-community.ps1"
> +  ]
> +},
> +{
> +  "type":"powershell",
> +  "scripts":[
> +"windows/cleanup.ps1"
> +  ]
> +},
> +{
> +  "type":"powershell",
> +  "scripts":[
> +"windows/setup-ec2config.ps1"
> +  ],
> +  "only":[
> +"amazon-ebs"
> +  ]
> +}
> +  ]
> +}
>
> http://git-wip-us.apache.org/repos/asf/geode/blob/340f2fca/packer/windows/install-vs-2015-community.ps1
> --
> diff --git a/packer/windows/install-vs-2015-community.ps1 b/packer/windows/install-vs-2015-community.ps1
> new file mode 100644
> index 000..c175410
> --- /dev/null
> +++ b/packer/windows/install-vs-2015-community.ps1
> @@ -0,0 +1,9 @@
> +# TODO AdminDeploy.xml
> +# vs_community.exe /AdminFile C:\Users\Administrator\AdminDeployment.xml /Log setup.log /Passive
> +Set-PSDebug -Trace 0
> +
> +Import-Module Packer
> +
> +$log = "vs_community.log"
> +
> +choco install visualstudio2015community -confirm
>
>


Re: Review Request 56244: GEODE-2410 : afterPrimary and afterSecondary event listeners pass through the same critical section code

2017-02-02 Thread nabarun nag

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56244/
---

(Updated Feb. 2, 2017, 11:10 p.m.)


Review request for geode, Barry Oglesby, Jason Huynh, Lynn Hughes-Godfrey, Dan 
Smith, and xiaojian zhou.


Repository: geode


Description (updated)
---

Design key points:
==
1. The afterPrimary and afterSecondary calls from the chunk bucket pass through 
the same critical section
2. If the bucket is still primary it will attempt to acquire a Dlock on the 
bucket and create the index repo.
3. The primary call tries for 100 milliseconds to acquire the Dlock and then 
checks if it is still primary. If it is, it will retry to acquire the lock. If 
it turned into a secondary it will exit the critical section. It is assumed 
that the after secondary call will do the cleanup.
4. The afterSecondary call will simply do the clean up - close the writer, 
release locks, undo contributions to the lucene stats.
5. Also, in a situation when index creation fails during after primary, lucene 
query execution will try to recompute the index once again.


Diffs
-

  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/AbstractPartitionedRepositoryManager.java
 9e055f0 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/IndexRepositoryFactory.java
 5be17e3 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
 da0c2c2 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneQueryImpl.java
 0e8bb37 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManager.java
 2f87218 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawIndexRepositoryFactory.java
 2f61913 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManager.java
 b503692 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectory.java
 e43b60b 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/LuceneFunction.java
 5271a2f 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/PartitionedRepositoryManagerJUnitTest.java
 1c47e89 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/RawLuceneRepositoryManagerJUnitTest.java
 a9fb52b 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/internal/directory/RegionDirectoryJUnitTest.java
 4204204 
  
geode-lucene/src/test/java/org/apache/geode/cache/lucene/test/IndexRepositorySpy.java
 d3b1f2f 

Diff: https://reviews.apache.org/r/56244/diff/


Testing
---

* Lucene precheck successful
* work ongoing on EOF + rebalance exceptions


Thanks,

nabarun nag



[GitHub] geode issue #315: GEODE-1995: Removed ReliableMessageQueue, ReliableMessageQ...

2017-02-02 Thread metatype
Github user metatype commented on the issue:

https://github.com/apache/geode/pull/315
  
@dschneider-pivotal what's next for this PR?  Is it good to merge?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (GEODE-2415) Write a function to return a zip file for a single server

2017-02-02 Thread Jared Stewart (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jared Stewart reassigned GEODE-2415:


Assignee: Jared Stewart

> Write a function to return a zip file for a single server
> -
>
> Key: GEODE-2415
> URL: https://issues.apache.org/jira/browse/GEODE-2415
> Project: Geode
>  Issue Type: Sub-task
>  Components: configuration, gfsh
>Reporter: Jared Stewart
>Assignee: Jared Stewart
>
> We need to write a function to be executed on each server that finds the 
> desired artifacts (logs, stat files, stack traces) on that server, given the 
> parameters of the export command (date limiting, --exclude-stats, etc.), zips 
> them, and returns the zip file to the calling locator using the mechanism 
> determined by GEODE-2414.
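
A bare-bones sketch of what such a function might look like, using the standard Function execution API; the class name, zipServerArtifacts() and returning the zip as a byte[] are placeholders until the GEODE-2414 transfer mechanism is decided:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    import org.apache.geode.cache.execute.Function;
    import org.apache.geode.cache.execute.FunctionContext;
    import org.apache.geode.cache.execute.FunctionException;

    // Hypothetical sketch, not the actual implementation.
    public class ExportArtifactsFunction implements Function {

      @Override
      public void execute(FunctionContext context) {
        try {
          // Collect logs, stat files and stack traces, honoring the export
          // command's filters (date limiting, --exclude-stats, ...).
          Path zip = zipServerArtifacts();
          // Send the zipped artifacts back to the calling locator.
          context.getResultSender().lastResult(Files.readAllBytes(zip));
        } catch (IOException e) {
          throw new FunctionException(e);
        }
      }

      @Override
      public String getId() {
        return ExportArtifactsFunction.class.getName();
      }

      @Override
      public boolean hasResult() {
        return true;
      }

      @Override
      public boolean optimizeForWrite() {
        return false;
      }

      @Override
      public boolean isHA() {
        return false;
      }

      private Path zipServerArtifacts() throws IOException {
        // Placeholder: a real implementation would walk the server's working
        // directory and zip only the matching artifacts.
        return Files.createTempFile("server-artifacts", ".zip");
      }
    }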



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-1995) remove ReliableMessageQueueFactory, ReliableMessageQueue, and getReliableMessageQueueFactory

2017-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850722#comment-15850722
 ] 

ASF GitHub Bot commented on GEODE-1995:
---

Github user metatype commented on the issue:

https://github.com/apache/geode/pull/315
  
@dschneider-pivotal what's next for this PR?  Is it good to merge?


> remove ReliableMessageQueueFactory, ReliableMessageQueue, and 
> getReliableMessageQueueFactory
> 
>
> Key: GEODE-1995
> URL: https://issues.apache.org/jira/browse/GEODE-1995
> Project: Geode
>  Issue Type: Improvement
>  Components: regions
>Reporter: Darrel Schneider
>Assignee: Avinash Dongre
>
> ReliableMessageQueueFactory, ReliableMessageQueue, and 
> GemFireCacheImpl.getReliableMessageQueueFactory should all be removed. They 
> are internal and were never used. No tests exist for them.
> They are part of required Roles which is a deprecated feature.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2410) afterPrimary and afterSecondary event listeners pass through the same critical section

2017-02-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850723#comment-15850723
 ] 

ASF subversion and git services commented on GEODE-2410:


Commit d2a626e9cecb4d9a1b3599201a379eefe0dc8556 in geode's branch 
refs/heads/develop from [~nnag]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=d2a626e ]

GEODE-2410: Lucene afterPrimary and afterSecondary calls pass through the same 
crit section.

* afterPrimary and afterSecondary calls are passed through the same 
critical section.
* If the caller is primary bucket, it will try to acquire a Dlock on 
the bucket and create the index repo.
* If it is secondary it will clean up the repo - close the writer and 
release the locks.
* If the primary changes to secondary while waiting for indexes to be 
created, it will exit from the critical section without acquiring the lock.


> afterPrimary and afterSecondary event listeners pass through the same 
> critical section
> --
>
> Key: GEODE-2410
> URL: https://issues.apache.org/jira/browse/GEODE-2410
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>
> * afterPrimary and afterSecondary listeners will call the same critical 
> section.
> * They will acquire a Dlock on the bucket and create the index if primary.
> * If they are secondary it will close the writer and release the Dlock.
> * The primary will reattempt to acquire the lock after 5 seconds and continue 
> to loop as long as it is still primary.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] geode issue #326: Feature/geode 2103 : Adding --http-service-port and --http...

2017-02-02 Thread metatype
Github user metatype commented on the issue:

https://github.com/apache/geode/pull/326
  
IIRC, there's an integration test that should be updated with the new 
options.  Take a look at 
`geode-core/src/test/resources/org/apache/geode/management/internal/cli/commands/golden-help-offline.properties`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] geode issue #329: [GEODE-1887] #comment Fix for Issue #1887

2017-02-02 Thread metatype
Github user metatype commented on the issue:

https://github.com/apache/geode/pull/329
  
Please fix conflict, thanks!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (GEODE-1887) Client PROXY region should delegate all operations to server

2017-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-1887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850729#comment-15850729
 ] 

ASF GitHub Bot commented on GEODE-1887:
---

Github user metatype commented on the issue:

https://github.com/apache/geode/pull/329
  
Please fix conflict, thanks!


> Client PROXY region should delegate all operations to server
> 
>
> Key: GEODE-1887
> URL: https://issues.apache.org/jira/browse/GEODE-1887
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Swapnil Bawaskar
>Assignee: Avinash Dongre
>
> Currently a ClientRegionShortcut.PROXY region sends operations like put() and 
> get() over to the server, but for operations like size() and isEmpty() it 
> just consults the local state on the client  and returns 0 and true 
> respectively, even though there may be data in the servers for that region.
> A PROXY region should not attempt to consult its local state for any 
> operation. 
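
For illustration, a minimal client-side snippet that exhibits the symptom; the locator address is an assumption, and the snippet is not test code from the ticket:

    import org.apache.geode.cache.Region;
    import org.apache.geode.cache.client.ClientCache;
    import org.apache.geode.cache.client.ClientCacheFactory;
    import org.apache.geode.cache.client.ClientRegionShortcut;

    // With a PROXY region the data lives only on the server, so size() and
    // isEmpty() answered from local client state give misleading results.
    public class ProxyRegionExample {
      public static void main(String[] args) {
        ClientCache cache = new ClientCacheFactory()
            .addPoolLocator("localhost", 10334)   // assumed locator address
            .create();
        Region<String, String> region = cache
            .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
            .create("exampleRegion");

        region.put("k1", "v1");                  // stored on the server
        System.out.println(region.size());       // before the fix: 0
        System.out.println(region.isEmpty());    // before the fix: true
        cache.close();
      }
    }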



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

