[GitHub] geode-native pull request #11: GEODE-2486: Initialize OpenSSL for DEFAULT ci...

2017-02-15 Thread echobravopapa
Github user echobravopapa commented on a diff in the pull request:

https://github.com/apache/geode-native/pull/11#discussion_r101287739
  
--- Diff: src/cryptoimpl/SSLImpl.cpp ---
@@ -52,25 +52,23 @@ SSLImpl::SSLImpl(ACE_SOCKET sock, const char 
*pubkeyfile,
 
   if (SSLImpl::s_initialized == false) {
 ACE_SSL_Context *sslctx = ACE_SSL_Context::instance();
-SSL_CTX *opensslctx = sslctx->context();
 
-if (SSL_CTX_set_cipher_list(opensslctx, "eNULL:DEFAULT") == 0) {
-  // if it fails here error is caught at connect.
-}
-// sslctx->set_mode(ACE_SSL_Context::SSLv23_client);
+SSL_CTX_set_cipher_list(sslctx->context(), "DEFAULT");
+sslctx->set_mode(ACE_SSL_Context::SSLv23_client);
--- End diff --

+1 for setting the mode


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (GEODE-2486) SSL ciphers other than NULL not supported

2017-02-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15867917#comment-15867917
 ] 

ASF GitHub Bot commented on GEODE-2486:
---

Github user echobravopapa commented on a diff in the pull request:

https://github.com/apache/geode-native/pull/11#discussion_r101287739
  
--- Diff: src/cryptoimpl/SSLImpl.cpp ---
@@ -52,25 +52,23 @@ SSLImpl::SSLImpl(ACE_SOCKET sock, const char 
*pubkeyfile,
 
   if (SSLImpl::s_initialized == false) {
 ACE_SSL_Context *sslctx = ACE_SSL_Context::instance();
-SSL_CTX *opensslctx = sslctx->context();
 
-if (SSL_CTX_set_cipher_list(opensslctx, "eNULL:DEFAULT") == 0) {
-  // if it fails here error is caught at connect.
-}
-// sslctx->set_mode(ACE_SSL_Context::SSLv23_client);
+SSL_CTX_set_cipher_list(sslctx->context(), "DEFAULT");
+sslctx->set_mode(ACE_SSL_Context::SSLv23_client);
--- End diff --

+1 for setting the mode


> SSL ciphers other than NULL not supported
> -
>
> Key: GEODE-2486
> URL: https://issues.apache.org/jira/browse/GEODE-2486
> Project: Geode
>  Issue Type: Bug
>  Components: native client
>Reporter: Jacob S. Barrett
>
> SSLImpl does not correctly initialize the OpenSSL library, so ciphers other
> than the NULL cipher cannot be used.
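The fix itself is C++/OpenSSL, but the underlying principle can be illustrated with Java's built-in JSSE API (an analogy, not the actual patch): sensible TLS defaults exclude NULL (no-encryption) cipher suites, so an application that wants them must opt in explicitly, which is what the old `"eNULL:DEFAULT"` cipher list did on the OpenSSL side.

```java
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class CipherDefaults {

    // Returns true if any default-enabled cipher suite is a NULL (no-encryption) suite.
    public static boolean defaultEnablesNullCipher() {
        try {
            SSLEngine engine = SSLContext.getDefault().createSSLEngine();
            return Arrays.stream(engine.getEnabledCipherSuites())
                         .anyMatch(name -> name.contains("NULL"));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // Stock JDKs ship with NULL suites disabled by default; code that wants
        // them (as the old adapter code did) has to enable them explicitly.
        System.out.println("NULL cipher enabled by default: " + defaultEnablesNullCipher());
    }
}
```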



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


PROXY and CACHING_PROXY regions on Client

2017-02-15 Thread Swapnil Bawaskar
GEODE-1887  was filed to
make sure that the user experience while using Geode is similar to RDBMS
and other data products out there. While reviewing the pull request
 I realized that we need to make
other operations propagate to the server as well. These include:
- invalidateRegion()
- destroyRegion()
- getSnapshotService()
- getEntry()
- keySet()
- values()
- isDestroyed()
- containsValueForKey()
- containsKey()
- containsValue()
- entrySet()

Also, rather than have a user "create" a PROXY region, which is just a
handle to a server side region, I would like to propose that
clientCache.getRegion("name") actually creates and returns a PROXY region
even if one was not created earlier/through cache.xml. So, in summary, the
workflow on the client would be:

ClientCacheFactory cacheFactory = new ClientCacheFactory();
cacheFactory.addPoolLocator("localhost", 10334);
ClientCache clientCache = cacheFactory.create();

Region students = clientCache.getRegion("students");
students.put("student1", "foo");
assert students.size() == 1;

If a client wants to have a near cache, they can still "create" a
CACHING_PROXY region.

For a CACHING_PROXY, I propose that we leave the default implementation
unchanged, i.e. all operations work locally on the client (except CRUD
operations that are always propagated to the server). In the case where the
client wishes to perform operations on the server, I propose that we
introduce a new method:

/**
 * @return a view of this Region in which all operations are performed on the server
 */
Region serverView();

so that all operations on the returned view (Region) are performed on the
server.

In the longer term, we should break up Region into two interfaces: one with
methods that only work on the client (like registerInterest and
serverView()) and the other for the server.
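A rough, self-contained sketch of the proposed serverView() semantics (plain maps stand in for Geode regions and the server; all class and method names here are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Invented sketch: CRUD propagates to the "server", other reads stay local
// unless the caller goes through serverView().
public class CachingProxyRegionSketch {
    private final Map<String, String> local = new HashMap<>();   // near cache
    private final Map<String, String> server = new HashMap<>();  // stand-in for the server region

    // Simulates data put by another client, visible only on the server.
    public void serverOnlyPut(String key, String value) {
        server.put(key, value);
    }

    // CRUD operations always propagate to the server.
    public void put(String key, String value) {
        server.put(key, value);
        local.put(key, value);
    }

    // Non-CRUD reads work locally by default, as today.
    public boolean containsKey(String key) {
        return local.containsKey(key);
    }

    // The proposed server view: the same region, but reads hit the server.
    public ServerView serverView() {
        return new ServerView();
    }

    public class ServerView {
        public boolean containsKey(String key) {
            return server.containsKey(key);
        }
    }
}
```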

Thanks!
Swapnil.


Fwd: GeodeRedisAdapter improvements/feedback

2017-02-15 Thread Gregory Green
Hello Hitesh,

The following is my feedback.

*1. Redis Type String*
  I like the idea of creating a region upfront. If we are still using the
convention that internal region names start with "__", then I would
suggest a region named "__RedisString".

*2. List Type*

I propose using a single partition region (ex: "__RedisList") for the List
commands.

Region<ByteArrayWrapper, List<ByteArrayWrapper>> region;

// Note: ByteArrayWrapper is what the current RedisAdapter uses as its data
// type. It converts strings to bytes using UTF-8 encoding.

Example Redis commands

RPUSH mylist A =>

Region<ByteArrayWrapper, List<ByteArrayWrapper>> region = getRegion("__RedisList");
List<ByteArrayWrapper> list = getOrCreateList(mylist);
list.add(A);
region.put(mylist, list);
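The RPUSH pseudocode above can be sketched as runnable code; a ConcurrentHashMap stands in for the "__RedisList" partitioned region, and String stands in for ByteArrayWrapper:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RedisListMapping {
    // Stand-in for the single "__RedisList" partitioned region.
    static final Map<String, List<String>> redisListRegion = new ConcurrentHashMap<>();

    // RPUSH key value -> new length of the list, as Redis returns.
    static int rpush(String key, String value) {
        List<String> list = redisListRegion.computeIfAbsent(key, k -> new ArrayList<>());
        list.add(value);                  // RPUSH appends at the tail
        redisListRegion.put(key, list);   // re-put so a real region would see the change
        return list.size();
    }
}
```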

*3. Hashes*

This is based on my Spring Data Redis testing for Hash/object support.

HMSET and similar Hash commands are submitted in the following format:
HMSET region:key [field value]+. I propose creating regions with the
following format:

Region<String, Map<String, String>> region;

Also see the Hashes section at the following URL:
https://redis.io/topics/data-types

Example Redis command:

HMSET companies:100 _class io.pivotal.redis.gemfire.example.repository.Company
id 100 name nylaInc email i...@nylainc.io website nylaInc.io taxID id:1
address.address1 address1 address.address2 address2 address.cityTown
cityTown address.stateProvince stateProvince address.zip zip
address.country country

=>

// Pseudo access code
Region<String, Map<String, String>> companiesRegion = getRegion("companies")
companiesRegion.put("100", toMap(fieldValues))

//--

// HGETALL region:key

HGETALL companies:100 =>

Region<String, Map<String, String>> companiesRegion = getRegion("companies")
return companiesRegion.get("100")

//HSET region:key field value

HSET companies:100 email upda...@pivotal.io =>

Region<String, Map<String, String>> companiesRegion = getRegion("companies");
Map<String, String> map = companiesRegion.get("100")
map.put("email", upda...@pivotal.io)
companiesRegion.put("100", map);
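The three hash commands above, sketched as runnable code with a ConcurrentHashMap standing in for the "companies" region (keys kept as plain strings for simplicity):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RedisHashMapping {
    // Stand-in for a per-type region such as "companies".
    static final Map<String, Map<String, String>> region = new ConcurrentHashMap<>();

    // HMSET key field value [field value ...]
    static void hmset(String key, String... fieldValues) {
        Map<String, String> map = region.computeIfAbsent(key, k -> new HashMap<>());
        for (int i = 0; i + 1 < fieldValues.length; i += 2) {
            map.put(fieldValues[i], fieldValues[i + 1]);
        }
        region.put(key, map); // re-put so a real region would see the update
    }

    // HGETALL key
    static Map<String, String> hgetall(String key) {
        return region.getOrDefault(key, new HashMap<>());
    }

    // HSET key field value
    static void hset(String key, String field, String value) {
        hmset(key, field, value);
    }
}
```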

FYI - I started to implement this and hope to submit a pull request soon
related to GEODE-2469.


*4. Set*

I propose using a single partition region (ex: __RedisSET) for the Set
commands.

Region<ByteArrayWrapper, Set<ByteArrayWrapper>> region;

Example Redis commands

SADD myset "Hello" =>

Region<ByteArrayWrapper, Set<ByteArrayWrapper>> region = getRegion("__RedisSET");
Set<ByteArrayWrapper> set = region.get(myset);
boolean added = set.add(Hello);
if (added) {
  region.put(myset, set);
}
return added;

SMEMBERS myset "Hello" =>

Region<ByteArrayWrapper, Set<ByteArrayWrapper>> region = getRegion("__RedisSET");
Set<ByteArrayWrapper> set = region.get(myset);
return set.contains(Hello);
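The SADD and membership-check pseudocode, sketched as runnable code with a ConcurrentHashMap standing in for the "__RedisSET" region and String standing in for ByteArrayWrapper:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class RedisSetMapping {
    // Stand-in for the single "__RedisSET" partitioned region.
    static final Map<String, Set<String>> region = new ConcurrentHashMap<>();

    // SADD key member -> true if the member was newly added
    static boolean sadd(String key, String member) {
        Set<String> set = region.computeIfAbsent(key, k -> new HashSet<>());
        boolean added = set.add(member);
        if (added) {
            region.put(key, set); // only re-put when the set actually changed
        }
        return added;
    }

    // Membership check (Redis SISMEMBER)
    static boolean contains(String key, String member) {
        return region.getOrDefault(key, new HashSet<>()).contains(member);
    }
}
```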

FYI - I started to implement this and hope to submit a pull request soon
related to GEODE-2469.


*5. Sorted Sets*

I propose using a single partition region for the Sorted Set commands.

Region<ByteArrayWrapper, SortedSet<ByteArrayWrapper>> region;

6. Default config for geode-region (vote)

I think the default setting should be partitioned with persistence and no
redundant copies.

7. It seems Redis knows the type (list, hashes, string, set, ...) of each key...

I suggest most operations can assume all keys are strings in UTF-8 byte
encoding; I am not sure whether any number-based Redis commands need
numeric keys.
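The "keys are UTF-8 bytes" convention can be sketched as a minimal wrapper (the name echoes the adapter's ByteArrayWrapper, but this implementation is invented): a raw byte[] cannot serve as a map or region key because arrays use identity equality, so the wrapper supplies value-based equals/hashCode.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Utf8Key {
    private final byte[] bytes;

    public Utf8Key(String s) {
        // Keys are stored as their UTF-8 encoding, matching the adapter's convention.
        this.bytes = s.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Utf8Key && Arrays.equals(bytes, ((Utf8Key) o).bytes);
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(bytes);
    }

    @Override
    public String toString() {
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```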

*8. Transactions:*

+1 I agree to not support transactions

*9. Redis COMMAND* (https://redis.io/commands/comman


+1 for implementing the "COMMAND"


-- Forwarded message --
From: Hitesh Khamesra 
Date: Tue, Feb 14, 2017 at 5:36 PM
Subject: GeodeRedisAdapter improvements/feedback
To: Geode , "u...@geode.apache.org" <
u...@geode.apache.org>


Current GeodeRedisAdapter implementation is based on
https://cwiki.apache.org/confluence/display/GEODE/Geode+Redis+Adapter+Proposal.
We are looking for some feedback on Redis commands and their mapping to
geode region.

1. Redis Type String
  a. Usage Set k1 v1
  b. Current implementation creates "STRING_REGION" geode-partition-region
upfront
  c. This k1/v1 are geode-region key/value
  d. Any feedback?

2. List Type
  a. usage "rpush mylist A"
  b. Current implementation maps each list to a geode-partition-region (i.e.
mylist is a geode-partition-region), with the ability to get items from the
head/tail
  c. Feedback/vote
  -- List type operation at region-entry level;
  -- region-key = "mylist"
  -- region-value = Arraylist (will support all redis list ops)
  d. Feedback/vote: are both behaviors desirable?


3. Hashes
  a. this represents field-value or java bean object
  b. usage "hmset user1000 username antirez birthyear 1977 verified 1"
  c. Current implementation maps each hash to a geode-partition-region (i.e.
user1000 is a geode-partition-region)
  d. Feedback/vote
-- Should we map hashes to region-entry
-- region-key = user1000
-- region-value = map
-- This will provide java-bean-like behaviour with tens of field-values
-- Personally I would prefer this.
  e. Feedback/vote: are both behaviours desirable?

4. Sets
  a. This represents unique keys in set
  b. usage "sadd myset 1 2 3"
  c. Current implementation maps each set to a geode-partition-region (i.e.
myset is a geode-partition-region)
  d. Feedback/vote
-- Should we map set to region-entry
-- region-key = myset
-- region-valu

Re: PROXY and CACHING_PROXY regions on Client

2017-02-15 Thread Michael Stolz
I have strong fears that if we make these wholesale changes to existing
APIs we're going to end up breaking lots of existing code.

For instance, if we make destroyRegion propagate when it never did before,
we may end up destroying a server side region in production that wasn't
expected.

I will advocate for being more explicit about operations that are going to
be performed on the server.

The other fear I have is that if we make all of these server side
operations available to the Java client but not to the C++ and C# clients
we will once again be making our C++ and C# users feel orphaned.


--
Mike Stolz
Principal Engineer, GemFire Product Manager
Mobile: +1-631-835-4771

On Wed, Feb 15, 2017 at 9:44 AM, Swapnil Bawaskar 
wrote:



Build failed in Jenkins: Geode-nightly #749

2017-02-15 Thread Apache Jenkins Server
See 

Changes:

[kmiller] GEODE-2479 Remove docs reference to gemstone.com package

[jiliao] GEODE-2474: mark NetstatDUnitTest as flaky

[jiliao] refactor ServerStarterRule and LocatorStarterRule so that they can be

[gzhou] GEODE-2471: fix the race condition in test code.

--
[...truncated 713 lines...]
:geode-cq:build
:geode-cq:distributedTest
:geode-cq:flakyTest
:geode-cq:integrationTest
:geode-json:assemble
:geode-json:compileTestJava UP-TO-DATE
:geode-json:processTestResources UP-TO-DATE
:geode-json:testClasses UP-TO-DATE
:geode-json:checkMissedTests UP-TO-DATE
:geode-json:spotlessJavaCheck
:geode-json:spotlessCheck
:geode-json:test UP-TO-DATE
:geode-json:check
:geode-json:build
:geode-json:distributedTest UP-TO-DATE
:geode-json:flakyTest UP-TO-DATE
:geode-json:integrationTest UP-TO-DATE
:geode-junit:javadoc
:geode-junit:javadocJar
:geode-junit:sourcesJar
:geode-junit:signArchives SKIPPED
:geode-junit:assemble
:geode-junit:compileTestJava
:geode-junit:processTestResources UP-TO-DATE
:geode-junit:testClasses
:geode-junit:checkMissedTests
:geode-junit:spotlessJavaCheck
:geode-junit:spotlessCheck
:geode-junit:test
:geode-junit:check
:geode-junit:build
:geode-junit:distributedTest
:geode-junit:flakyTest
:geode-junit:integrationTest
:geode-lucene:assemble
:geode-lucene:compileTestJava
Download 
https://repo1.maven.org/maven2/org/apache/lucene/lucene-test-framework/6.4.1/lucene-test-framework-6.4.1.pom
Download 
https://repo1.maven.org/maven2/org/apache/lucene/lucene-codecs/6.4.1/lucene-codecs-6.4.1.pom
Download 
https://repo1.maven.org/maven2/com/carrotsearch/randomizedtesting/randomizedtesting-runner/2.4.0/randomizedtesting-runner-2.4.0.pom
Download 
https://repo1.maven.org/maven2/com/carrotsearch/randomizedtesting/randomizedtesting-parent/2.4.0/randomizedtesting-parent-2.4.0.pom
Download 
https://repo1.maven.org/maven2/org/apache/lucene/lucene-test-framework/6.4.1/lucene-test-framework-6.4.1.jar
Download 
https://repo1.maven.org/maven2/org/apache/lucene/lucene-codecs/6.4.1/lucene-codecs-6.4.1.jar
Download 
https://repo1.maven.org/maven2/com/carrotsearch/randomizedtesting/randomizedtesting-runner/2.4.0/randomizedtesting-runner-2.4.0.jar
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
:geode-lucene:processTestResources
:geode-lucene:testClasses
:geode-lucene:checkMissedTests
:geode-lucene:spotlessJavaCheck
:geode-lucene:spotlessCheck
:geode-lucene:test
:geode-lucene:check
:geode-lucene:build
:geode-lucene:distributedTest
:geode-lucene:flakyTest
:geode-lucene:integrationTest
:geode-old-client-support:assemble
:geode-old-client-support:compileTestJava
:geode-old-client-support:processTestResources UP-TO-DATE
:geode-old-client-support:testClasses
:geode-old-client-support:checkMissedTests
:geode-old-client-support:spotlessJavaCheck
:geode-old-client-support:spotlessCheck
:geode-old-client-support:test
:geode-old-client-support:check
:geode-old-client-support:build
:geode-old-client-support:distributedTest
:geode-old-client-support:flakyTest
:geode-old-client-support:integrationTest
:geode-old-versions:javadoc UP-TO-DATE
:geode-old-versions:javadocJar
:geode-old-versions:sourcesJar
:geode-old-versions:signArchives SKIPPED
:geode-old-versions:assemble
:geode-old-versions:compileTestJava UP-TO-DATE
:geode-old-versions:processTestResources UP-TO-DATE
:geode-old-versions:testClasses UP-TO-DATE
:geode-old-versions:checkMissedTests UP-TO-DATE
:geode-old-versions:spotlessJavaCheck
:geode-old-versions:spotlessCheck
:geode-old-versions:test UP-TO-DATE
:geode-old-versions:check
:geode-old-versions:build
:geode-old-versions:distributedTest UP-TO-DATE
:geode-old-versions:flakyTest UP-TO-DATE
:geode-old-versions:integrationTest UP-TO-DATE
:geode-pulse:assemble
:geode-pulse:compileTestJava
Download 
https://repo1.maven.org/maven2/com/codeborne/phantomjsdriver/1.3.0/phantomjsdriver-1.3.0.pom
Download 
https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-api/3.0.1/selenium-api-3.0.1.pom
Download 
https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-remote-driver/3.0.1/selenium-remote-driver-3.0.1.pom
Download 
https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-support/3.0.1/selenium-support-3.0.1.pom
Download 
https://repo1.maven.org/maven2/org/apache/httpcomponents/httpmime/4.5.2/httpmime-4.5.2.pom
Download 
https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcore/4.4.4/httpcore-4.4.4.pom
Download 
https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcomponents-core/4.4.4/httpcomponents-core-4.4.4.pom
Download 
https://repo1.maven.org/maven2/cglib/cglib-nodep/3.2.4/cglib-nodep-3.2.4.pom
Download 
https://repo1.maven.org/maven2/com/codeborne/phantomjsdriver/1.3.0/phantomjsdriver-1.3.0.jar
Download 
https://repo

Re: [DISCUSS] JIRA guidelines

2017-02-15 Thread Michael William Dodge
+1

> On 14 Feb, 2017, at 20:50, William Markito Oliveira 
>  wrote:
> 
> +1 
> 
> Finally!! ;)
> 
> Sent from my iPhone
> 
>> On Feb 14, 2017, at 7:59 PM, Galen M O'Sullivan  
>> wrote:
>> 
>> +1 to the article and removing the draft label
>> 
>>> On Tue, Feb 14, 2017 at 4:05 PM, Akihiro Kitada  wrote:
>>> 
>>> I agree!
>>> 
>>> 
>>> --
>>> Akihiro Kitada  |  Staff Customer Engineer |  +81 80 3716 3736
>>> Support.Pivotal.io   |  Mon-Fri  9:00am to
>>> 5:30pm JST  |  1-877-477-2269
>>> 
>>> 
>>> 
>>> 2017-02-15 8:47 GMT+09:00 Dan Smith :
>>> 
 We have this draft of JIRA guidelines sitting on the wiki. I updated it
 slightly. Can we agree on these guidelines and remove the draft label? Is
 there more that needs to be here?
 
 https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=57311462
 
 -Dan
 
>>> 



Re: PROXY and CACHING_PROXY regions on Client

2017-02-15 Thread Michael William Dodge
I agree with Mike that whatever changes in behavior are made to the Java client 
library should also be made in the C++ and C# libraries.

Sarge

> On 15 Feb, 2017, at 08:02, Michael Stolz  wrote:
> 



Re: PROXY and CACHING_PROXY regions on Client

2017-02-15 Thread John Blum
I agree with what Mike said.  Plus, I don't think this is as simple as it
appears.

For instance, if I ...

Region students = clientCache.getRegion("Students");

What happens when the "Students" Region does not exist server-side?

This would require a check to determine whether the Students Region
actually existed on the server first before creating the PROXY Region on
the client.  If this check were not performed and the "Students" Region did
not exist on the server, then a user would not know about this fact until
they performed a Region operation.

For example:

package ...;

import ...;

public class NativeClientApp {

  public static void main(String[] args) {
ClientCache gemfireCache = new ClientCacheFactory()
  .set("name", NativeClientApp.class.getSimpleName())
  .set("log-level", "config")
  .addPoolServer("localhost", 40404)
  .create();

ClientRegionFactory<String, String> exampleProxyRegionFactory =
  gemfireCache.createClientRegionFactory(ClientRegionShortcut.PROXY);

Region<String, String> exampleProxyRegion =
exampleProxyRegionFactory.create("Example");

exampleProxyRegion.put("keyOne", "valueOne");

assertThat(exampleProxyRegion.get("keyOne")).isEqualTo("valueOne");
  }
}

Without starting a server, this program fails on the Region.put(..) call (i.e.
exampleProxyRegion.put("keyOne", "valueOne");), and NOT when the Region is
created. This is unfortunate, since it does not "fail fast": the failure can
occur much later in the application lifecycle, when it is least expected!

Therefore, in certain cases, the following...

Region students = clientCache.getRegion("Students");

will work when the "Students" Region does in fact exist on the server, and
in other cases will fail once an operation is performed.

Additionally, I think it would be unclear when a Region obtained via
clientCache.getRegion("name") goes to the server and when it does not, or
whether the call even created a Region at all versus returning another
Region defined in, say, client-cache.xml, which could be LOCAL.

All in all, I think overloading the API in this way is very confusing and
wrong.

$0.02

-John




On Wed, Feb 15, 2017 at 8:02 AM, Michael Stolz  wrote:


Re: Build failed in Jenkins: Geode-nightly #749

2017-02-15 Thread Galen M O'Sullivan
I don't seem to have access to see the build report. Is that restricted to
committers only?

Thanks,
Galen

On Wed, Feb 15, 2017 at 8:08 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

Re: PROXY and CACHING_PROXY regions on Client

2017-02-15 Thread Eric Shu
John,

The proposed solution actually tries to solve the situation you mentioned:
creating a proxy Region fails silently on a client when the region does not
reside on the server.

Region students = clientCache.getRegion("Students");

getRegion() will check on the server side and should fail with
RegionDestroyedException if the region does not reside on a server.

As to Mike's concern: shall we throw UnsupportedOperationException for
region operations on PROXY? Based on current findings, it seems
invalidateRegion and destroyRegion on PROXY are most likely not used by
users.
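Eric's suggestion, sketched as stand-alone code (class and flag names invented): region-wide operations on a plain PROXY would throw rather than silently acting on a local handle.

```java
public class ProxyRegionPolicySketch {
    private final boolean cachingProxy; // false = plain PROXY

    public ProxyRegionPolicySketch(boolean cachingProxy) {
        this.cachingProxy = cachingProxy;
    }

    public void invalidateRegion() {
        if (!cachingProxy) {
            // A plain PROXY has no local state worth invalidating; refusing loudly
            // avoids the ambiguity of a silent local-only operation.
            throw new UnsupportedOperationException(
                "invalidateRegion is not supported on a PROXY region");
        }
        // CACHING_PROXY: invalidate local entries (elided in this sketch).
    }
}
```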

Regards,
Eric


On Wed, Feb 15, 2017 at 8:31 AM, Michael William Dodge 
wrote:

> >>
> >> If a client wants to have a near cache, they can still "create" a
> >> CACHING_PROXY region.
> >>
> >> For a CACHING_PROXY, I propose that we leave the default implementation
> >> unchanged, i.e. all operations work locally on the client (except CRUD
> >> operations that are always propagated to the server). In the case where
> the
> >> client wishes to perform operations on the server, I propose that we
> >> introduce a new method:
> >>
> >> /**
> >> * @return
> >> */
> >> Region serverView();
> >>
> >> so that all operations on the returned view (Region) are performed on
> the
> >> server.
> >>
> >> In the longer term, we should break up Region into two interfaces, one
> that
> >> has methods that only work on the client (like registerInterest and
> >> serverView()) and other for the server.
> >>
> >> Thanks!
> >> Swapnil.
> >>
>
>
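The near-cache semantics debated above (local reads on a CACHING_PROXY, CRUD always propagated, plus the proposed serverView() handle whose operations all hit the server) can be sketched as a toy model. This is NOT the actual Geode API; every name below (RegionView, CachingProxyRegion, the Map standing in for the server) is hypothetical and exists only to illustrate the proposal:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: a client-side region with a local near cache, where the
// "server" is just a Map. Hypothetical names, not Geode classes.
interface RegionView<K, V> {
  V get(K key);
  void put(K key, V value);
  int size();
}

class CachingProxyRegion<K, V> implements RegionView<K, V> {
  private final Map<K, V> localCache = new HashMap<>(); // near cache
  private final Map<K, V> serverStore;                  // stands in for the server

  CachingProxyRegion(Map<K, V> serverStore) {
    this.serverStore = serverStore;
  }

  public V get(K key) {
    // Reads consult the near cache first, falling back to the server.
    V v = localCache.get(key);
    if (v == null) {
      v = serverStore.get(key);
      if (v != null) {
        localCache.put(key, v);
      }
    }
    return v;
  }

  public void put(K key, V value) {
    // CRUD operations always propagate to the server.
    serverStore.put(key, value);
    localCache.put(key, value);
  }

  public int size() {
    // Per the proposal, non-CRUD operations stay local by default.
    return localCache.size();
  }

  // The proposed serverView(): same region, but every operation
  // goes to the server.
  public RegionView<K, V> serverView() {
    return new RegionView<K, V>() {
      public V get(K key) { return serverStore.get(key); }
      public void put(K key, V value) { serverStore.put(key, value); }
      public int size() { return serverStore.size(); }
    };
  }
}
```

The point of the sketch is the asymmetry: size() on the region answers from the near cache, while size() on the server view reflects entries the client has never fetched.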


Re: PROXY and CACHING_PROXY regions on Client

2017-02-15 Thread Anthony Baker
Introducing an API like this gives us the opportunity to split the 
client/server region APIs.  I don’t think we should return Region, but 
something specific to the “server view”.  How would those APIs operate on a 
CACHING_PROXY?

Anthony

> On Feb 15, 2017, at 6:44 AM, Swapnil Bawaskar  wrote:
> 
> /**
> * @return
> */
> Region serverView();
> 



[jira] [Resolved] (GEODE-2442) Address link breaking

2017-02-15 Thread Ernest Burghardt (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ernest Burghardt resolved GEODE-2442.
-
Resolution: Invalid

> Address link breaking
> -
>
> Key: GEODE-2442
> URL: https://issues.apache.org/jira/browse/GEODE-2442
> Project: Geode
>  Issue Type: Bug
>  Components: native client
>Reporter: Ernest Burghardt
>
> 9.1 breaking changes
> Link Breaking
> Dropping deprecated types and functions. Only breaks if still in customer 
> code despite years of deprecation.
> Quickstarts should be a good test of linkage issues.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Review Request 56633: GEODE-2474: refactor code to use SystemUtils to read OS system props

2017-02-15 Thread Kirk Lund


On Feb. 14, 2017, 4:33 p.m., Kirk Lund wrote:
> > Everything else looks great.

It happens from following TDD. I write the test first and then write the code. 
This is an example where the results of TDD produce a test with questionable 
value, but if the tests are very fast (and they are) then it's better to have 
more tests than fewer. In particular, our project has a history of bad code and 
too few unit tests, so it's a good habit to follow for Geode.


- Kirk


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56633/#review165524
---


On Feb. 14, 2017, 2:31 a.m., Kirk Lund wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56633/
> ---
> 
> (Updated Feb. 14, 2017, 2:31 a.m.)
> 
> 
> Review request for geode, Jinmei Liao, Jared Stewart, Kevin Duling, and Ken 
> Howe.
> 
> 
> Bugs: GEODE-2474
> https://issues.apache.org/jira/browse/GEODE-2474
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> GEODE-2474: refactor code to use SystemUtils to read OS system props
> 
> Centralize OS system property reading in SystemUtils.
> 
> Refactor NetstatFunction and GemFireVersion to use SystemUtils.
> 
> This fixes use of --with-lsof on Mac (manually tested). I'll add new tests to 
> NetstatDUnitTest and a new integration test for netstat command in a 
> follow-up commit & review.
> 
> I have several changes on feature/GEODE-2474 branch which I'll separate into 
> multiple reviews. I'll do a final precheckin on the entire branch and then 
> merge the changes in after everything passes review and precheckin.
> 
> 
> Diffs
> -
> 
>   geode-core/src/main/java/org/apache/geode/internal/GemFireVersion.java 
> 26d4fb3c5705bffdcdbbc6c261dbe9ffd297642e 
>   geode-core/src/main/java/org/apache/geode/internal/lang/SystemUtils.java 
> 66c158c93fecac4feb2da56f617f5efc7bba56e1 
>   
> geode-core/src/main/java/org/apache/geode/management/internal/cli/functions/NetstatFunction.java
>  5fa30f47187972a209a781eae9024957dc80fb72 
>   
> geode-core/src/test/java/org/apache/geode/internal/lang/SystemUtilsJUnitTest.java
>  48f176eabc18d3ffa56daaa7da12634a9554f39d 
> 
> Diff: https://reviews.apache.org/r/56633/diff/
> 
> 
> Testing
> ---
> 
> SystemUtilsJUnitTest
> NetstatDUnitTest
> 
> 
> Thanks,
> 
> Kirk Lund
> 
>
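The refactoring described above (centralizing OS system-property reads in one utility instead of scattering System.getProperty("os.name") checks across NetstatFunction, GemFireVersion, etc.) can be sketched as follows. The class and method names here are illustrative, not necessarily those of Geode's actual SystemUtils:

```java
// Hedged sketch of a centralized OS-detection utility: "os.name" is read
// in exactly one place, and callers use the boolean helpers.
final class OsProps {
  private OsProps() {}

  static String osName() {
    // Empty-string default avoids null checks at call sites.
    return System.getProperty("os.name", "");
  }

  static boolean isMacOS() {
    String os = osName().toLowerCase();
    return os.contains("mac") || os.contains("darwin");
  }

  static boolean isLinux() {
    return osName().toLowerCase().contains("linux");
  }

  static boolean isWindows() {
    return osName().toLowerCase().contains("windows");
  }
}
```

A command like netstat's --with-lsof can then branch on isMacOS()/isLinux() without each caller parsing the property string itself.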



Re: PROXY and CACHING_PROXY regions on Client

2017-02-15 Thread John Blum
@Eric-

Hmm...

Well, I'd argue that it is still confusing to "*overload*" the purpose of
getRegion("path") to dually "*get*" (the primary function/purpose) and also
"*create*" (secondary).

I'd also say that the getRegion("path") API call is not exclusive to a
*ClientCache*, particularly since getRegion("path") is on RegionService

[1],
which both ClientCache and Cache implement, indirectly through GemFireCache,
I might add.  Therefore, getRegion("path") has a completely different
meaning server-side (or in the embedded "peer cache" UC).

-j

[1]
http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/RegionService.html#getRegion(java.lang.String)

On Wed, Feb 15, 2017 at 9:29 AM, Anthony Baker  wrote:

> Introducing an API like this gives us the opportunity to split the
> client/server region API’s.  I don’t think we should return Region, but
> something specific to “server view”.  How would those API’s operate on a
> CACHING_PROXY?
>
> Anthony
>
> > On Feb 15, 2017, at 6:44 AM, Swapnil Bawaskar 
> wrote:
> >
> > /**
> > * @return
> > */
> > Region serverView();
> >
>
>


-- 
-John
john.blum10101 (skype)


Re: Review Request 56668: GEODE-2474: mark NetstatDUnitTest as flaky

2017-02-15 Thread Kevin Duling

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56668/#review165730
---


Ship it!




Ship It!

- Kevin Duling


On Feb. 14, 2017, 9:01 a.m., Jinmei Liao wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56668/
> ---
> 
> (Updated Feb. 14, 2017, 9:01 a.m.)
> 
> 
> Review request for geode, Jared Stewart, Kevin Duling, Ken Howe, and Kirk 
> Lund.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> GEODE-2474: mark NetstatDUnitTest as flaky
> 
> 
> Diffs
> -
> 
>   
> geode-core/src/test/java/org/apache/geode/management/internal/cli/NetstatDUnitTest.java
>  3ee0c46675a250db4db2b8558c5cee3cf1e5eea8 
> 
> Diff: https://reviews.apache.org/r/56668/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jinmei Liao
> 
>



Re: Review Request 56637: refactor ServerStarterRule and LocatorStarterRule so that they can be created without a Properties first.

2017-02-15 Thread Kevin Duling

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56637/#review165731
---


Ship it!




Ship It!

- Kevin Duling


On Feb. 14, 2017, 11 a.m., Jinmei Liao wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56637/
> ---
> 
> (Updated Feb. 14, 2017, 11 a.m.)
> 
> 
> Review request for geode, Jared Stewart, Kevin Duling, Ken Howe, and Kirk 
> Lund.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> refactor ServerStarterRule and LocatorStarterRule so that they can be created 
> without a Properties first.
> 
> * this would ease the usage of these rules, so that they can always be used 
> as rules and users don't have to worry about stopping the locator/server 
> manually.
> 
> 
> Diffs
> -
> 
>   
> geode-core/src/test/java/org/apache/geode/test/dunit/rules/ServerStarterRule.java
>  f93498fcbcea0ec8d8f0b91a0367e9b6fb7d4ae1 
> 
> Diff: https://reviews.apache.org/r/56637/diff/
> 
> 
> Testing
> ---
> 
> precheckin successful
> 
> This code change will also fix the currently failing integration tests. See 
> the change in MemberMBeanSecurityJUnitTest.java
> 
> 
> Thanks,
> 
> Jinmei Liao
> 
>



Re: Review Request 56635: GEODE-2481: extract default properties generation to its own class

2017-02-15 Thread Kevin Duling

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56635/#review165732
---


Ship it!




Ship It!

- Kevin Duling


On Feb. 13, 2017, 6:33 p.m., Kirk Lund wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56635/
> ---
> 
> (Updated Feb. 13, 2017, 6:33 p.m.)
> 
> 
> Review request for geode, Jinmei Liao, Jared Stewart, Kevin Duling, and Ken 
> Howe.
> 
> 
> Bugs: GEODE-2481
> https://issues.apache.org/jira/browse/GEODE-2481
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> GEODE-2481: extract default properties generation to its own class
> 
> While refactoring GemFireVersion for GEODE-2474, I noticed that 
> GemFireVersionIntegrationJUnitTest has nothing to do with GemFireVersion.
> 
> Extract generation of default properties to DefaultPropertiesGenerator.
> 
> Rename GemFireVersionIntegrationJUnitTest to 
> DefaultPropertiesGeneratorIntegrationTest. Add tests to increase code 
> coverage.
> 
> Note: I have several changes on feature/GEODE-2474 branch which I'll separate 
> into multiple reviews. I'll do a final precheckin on the entire branch and 
> then merge the changes in after everything passes review and precheckin.
> 
> DistributionConfig and DefaultPropertiesGenerator should eventually move to a 
> configuration package, but I don't really want to combine anything that big 
> with this change.
> 
> 
> Diffs
> -
> 
>   geode-assembly/build.gradle f34688043dd3e6bf8e8bdf0cb223d533b692e301 
>   
> geode-core/src/main/java/org/apache/geode/distributed/internal/DefaultPropertiesGenerator.java
>  PRE-CREATION 
>   
> geode-core/src/main/java/org/apache/geode/distributed/internal/DistributionConfigImpl.java
>  fa6d13f7cec40ae18f78da28b3b912e01be363aa 
>   
> geode-core/src/test/java/org/apache/geode/distributed/internal/DefaultPropertiesGeneratorIntegrationTest.java
>  PRE-CREATION 
>   
> geode-core/src/test/java/org/apache/geode/internal/GemFireVersionIntegrationJUnitTest.java
>  cae331325f17b470e6dd786d0f9a52bba7cb42a6 
> 
> Diff: https://reviews.apache.org/r/56635/diff/
> 
> 
> Testing
> ---
> 
> DefaultPropertiesGeneratorIntegrationTest
> GemFireVersionJUnitTest
> 
> 
> Thanks,
> 
> Kirk Lund
> 
>
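The extraction described in the review above (pulling default-properties generation out of version reporting into its own single-purpose class) can be illustrated with a minimal sketch. The class name, property keys, and defaults below are hypothetical examples, not Geode's actual DefaultPropertiesGenerator:

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.UncheckedIOException;
import java.util.Properties;

// Illustrative sketch: one class whose only responsibility is producing a
// default properties file, separated from any version-reporting code.
class DefaultPropsGenerator {
  static String generate() {
    Properties defaults = new Properties();
    defaults.setProperty("log-level", "config"); // example default only
    defaults.setProperty("mcast-port", "0");     // example default only
    StringWriter out = new StringWriter();
    try {
      defaults.store(out, "Sample default properties");
    } catch (IOException e) {
      // StringWriter cannot actually fail, but store() declares IOException.
      throw new UncheckedIOException(e);
    }
    return out.toString();
  }
}
```

Keeping the generator free of other concerns is what makes the rename from GemFireVersionIntegrationJUnitTest to DefaultPropertiesGeneratorIntegrationTest meaningful: the test now names the class it exercises.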



Re: Review Request 56682: GEODE-2444: move Redis out of geode-core.

2017-02-15 Thread Galen O'Sullivan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56682/
---

(Updated Feb. 15, 2017, 5:53 p.m.)


Review request for geode, Bruce Schuchardt, Hitesh Khamesra, Udo Kohlmeyer, and 
Dan Smith.


Changes
---

* Fix data serializables by removing the old redis classes from core.


Repository: geode


Description
---

I'm taking over this ticket from Udo; his earlier review is at: https://reviews.apache.org/r/56564/

* Move geode-redis to its own package.
* Make a `GeodeRedisService` interface that will get loaded by 
`GemFireCacheImpl`.
* Move functionality to `GeodeRedisServiceImpl`, keep the old 
`GeodeRedisServer` as a shell for backwards compatibility.
* Improve tests and make some fixes.
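The service-loading pattern the bullets describe (an optional GeodeRedisService interface discovered at runtime, so the cache degrades gracefully when the geode-redis module is absent) is commonly built on java.util.ServiceLoader; whether Geode uses exactly that mechanism is an assumption here, and the interface below is a stand-in, not the real org.apache.geode.redis type:

```java
import java.util.Iterator;
import java.util.ServiceLoader;

// Stand-in for the real service interface; hypothetical name and shape.
interface RedisServiceSketch {
  void start(int port);
}

class ServiceLookup {
  // Returns the first registered implementation, or null when no module on
  // the classpath provides a META-INF/services entry for the interface.
  static RedisServiceSketch findRedisService() {
    Iterator<RedisServiceSketch> it =
        ServiceLoader.load(RedisServiceSketch.class).iterator();
    return it.hasNext() ? it.next() : null;
  }
}
```

With this shape, the core module compiles against only the interface, and the caller checks for null before starting the server, which matches the "don't throw an exception when the Redis service is not found" commit later in this thread.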


Diffs (updated)
-

  geode-core/build.gradle 8eba6d4e8 
  
geode-core/src/main/java/org/apache/geode/distributed/ConfigurationProperties.java
 63f650510 
  
geode-core/src/main/java/org/apache/geode/distributed/internal/DistributionConfig.java
 c2a395de0 
  
geode-core/src/main/java/org/apache/geode/distributed/internal/DistributionConfigImpl.java
 fa6d13f7c 
  
geode-core/src/main/java/org/apache/geode/internal/cache/GemFireCacheImpl.java 
6e374ecb7 
  
geode-core/src/main/java/org/apache/geode/internal/hll/CardinalityMergeException.java
 59ab0950e 
  geode-core/src/main/java/org/apache/geode/internal/hll/HyperLogLog.java 
4bdf81c77 
  geode-core/src/main/java/org/apache/geode/internal/hll/HyperLogLogPlus.java 
fc4b6e554 
  
geode-core/src/main/java/org/apache/geode/management/internal/cli/domain/FixedPartitionAttributesInfo.java
 eb0435a37 
  geode-core/src/main/java/org/apache/geode/redis/GeodeRedisServer.java 
4c97c98bf 
  geode-core/src/main/java/org/apache/geode/redis/GeodeRedisService.java 
PRE-CREATION 
  
geode-core/src/main/java/org/apache/geode/redis/internal/ByteArrayWrapper.java 
4a0ef5989 
  
geode-core/src/main/java/org/apache/geode/redis/internal/ByteToCommandDecoder.java
 124bf7512 
  geode-core/src/main/java/org/apache/geode/redis/internal/Coder.java  
  geode-core/src/main/java/org/apache/geode/redis/internal/Command.java  
  geode-core/src/main/java/org/apache/geode/redis/internal/DoubleWrapper.java 
60cd130da 
  
geode-core/src/main/java/org/apache/geode/redis/internal/ExecutionHandlerContext.java
 e2b49bedc 
  geode-core/src/main/java/org/apache/geode/redis/internal/Executor.java  
  geode-core/src/main/java/org/apache/geode/redis/internal/Extendable.java  
  
geode-core/src/main/java/org/apache/geode/redis/internal/RedisCommandParserException.java
  
  
geode-core/src/main/java/org/apache/geode/redis/internal/RedisCommandType.java  
  geode-core/src/main/java/org/apache/geode/redis/internal/RedisConstants.java 
3c39c01c5 
  geode-core/src/main/java/org/apache/geode/redis/internal/RedisDataType.java 
63a15dff9 
  
geode-core/src/main/java/org/apache/geode/redis/internal/RedisDataTypeMismatchException.java
  
  
geode-core/src/main/java/org/apache/geode/redis/internal/RegionCreationException.java
  
  geode-core/src/main/java/org/apache/geode/redis/internal/RegionProvider.java 
5994d7d8c 
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/AbstractExecutor.java
 c9d47ab9b 
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/AbstractScanExecutor.java
 0eb6dcad3 
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/AuthExecutor.java
 9d318a450 
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/DBSizeExecutor.java
  
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/DelExecutor.java
 e0db6518c 
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/EchoExecutor.java
 407e65354 
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/ExistsExecutor.java
 96611dc06 
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/ExpirationExecutor.java
  
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/ExpireAtExecutor.java
 0962a7daa 
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/ExpireExecutor.java
 d986826e7 
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/FlushAllExecutor.java
 f8551665a 
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/KeysExecutor.java
 9398d87e3 
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/ListQuery.java
  
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/PExpireAtExecutor.java
  
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/PExpireExecutor.java
  
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/PTTLExecutor.java
  
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/PersistExecutor.java
 db4d19a88 
  
geode-core/src/main/java/org/apache/geode/redis/internal/executor/PingExecutor.java
  
  
geode-core/src/main/java/org/apache/geode/r

[GitHub] geode pull request #398: Split the redis adapter into its own package

2017-02-15 Thread galen-pivotal
GitHub user galen-pivotal opened a pull request:

https://github.com/apache/geode/pull/398

Split the redis adapter into its own package

Under this PR, the redis adapter is moved to its own source root, and 
registered as a service.

We're intending to make this a feature branch to improve the redis adapter.
@bschuchardt

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/galen-pivotal/incubator-geode 
feature/GEODE-2449

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/geode/pull/398.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #398


commit c6dbc6d4e4ea82d65074e30c4c15085a5e3d8688
Author: Udo Kohlmeyer 
Date:   2017-02-10T21:33:31Z

GEODE-2449: Moved Redis out of core with minimal Extension work added

commit 81e64a9a27c74ffcff78103b66cc9ca9ec4f7cf3
Author: Udo Kohlmeyer 
Date:   2017-02-10T21:43:28Z

GEODE-2449: spotless

commit 318b56a5e6a8cb443929dbe3d80fa5711777c2ef
Author: Udo Kohlmeyer 
Date:   2017-02-10T22:45:45Z

GEODE-2449: Moved Coder.java from core to redis module.
fix up code from code review

commit 5562547e065906c3c1815875f8693ad0f4be93d0
Author: Udo Kohlmeyer 
Date:   2017-02-10T23:21:27Z

GEODE-2449: Do a null check on the stopRedisServer

commit f79beb1e9d5ec08d25bb02f93eb96468b3253a72
Author: Galen O'Sullivan 
Date:   2017-02-13T18:53:05Z

GEODE-2449: changes in response to review.

* Move HyperLogLog back into geode-core.
* Bring back deprecated GeodeRedisServer for backwards compatibilty.

commit c2c5e07eb8ad7ae55a24f5d972678b78852e0a15
Author: Galen O'Sullivan 
Date:   2017-02-13T23:53:16Z

Merge branch 'develop' into feature/GEODE-2449

commit 452ac17c90a48d13342ec25aa768aaa1e8359867
Author: Galen O'Sullivan 
Date:   2017-02-14T00:29:13Z

Don't throw an error if Redis isn't supposed to start.

commit f990e7907cddad5522eea0b6919702489be7a49d
Author: Galen O'Sullivan 
Date:   2017-02-14T01:25:06Z

update doc comment links

commit a1116189eb25c2209e21685386d9acfcf6fbdb9e
Author: Galen O'Sullivan 
Date:   2017-02-14T07:48:01Z

GEODE-2449. Don't throw exception on redis port of zero

when Redis service is not found.

We would like to have the ability to tell the difference between
settings that are unset and those that are set to zero, but for the
moment we can't. So that's how it is. It may be that we'll have to not
allow starting a Redis host by setting a port number of zero in config,
or that it will just be less reliable.

commit 4ce902fce6759aed67bc1c321096326b7ce8bd60
Author: Galen OSullivan 
Date:   2017-02-14T17:22:03Z

Fix a log message that was causing tests to fail.

commit 2fbdb0cd8958051cb807a1674bf8344c92473802
Author: Galen OSullivan 
Date:   2017-02-14T22:30:56Z

GEODE-2449. Don't expect moved Redis classes in core.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (GEODE-2421) Create VS2015 AMI

2017-02-15 Thread Michael Martell (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868277#comment-15868277
 ] 

Michael Martell commented on GEODE-2421:


Since VS 2017 is needed by dotNetty, Ernie and I suggest changing this ticket 
to use VS 2017 instead of VS 2015. That version is slated to be released on Mar 
7, 2017.

I have been using the VS 2017 release candidates for about three months, and 
haven't seen any problems building and running our tests.

> Create VS2015 AMI
> -
>
> Key: GEODE-2421
> URL: https://issues.apache.org/jira/browse/GEODE-2421
> Project: Geode
>  Issue Type: Task
>  Components: native client
>Reporter: Ernest Burghardt
>






[jira] [Created] (GEODE-2488) NetstatDUnitTest fails with OutOfMemoryError

2017-02-15 Thread Kirk Lund (JIRA)
Kirk Lund created GEODE-2488:


 Summary: NetstatDUnitTest fails with OutOfMemoryError
 Key: GEODE-2488
 URL: https://issues.apache.org/jira/browse/GEODE-2488
 Project: Geode
  Issue Type: Bug
  Components: gfsh
Reporter: Kirk Lund


The JUnit controller JVM for NetstatDUnitTest fails with something like the 
following stack. The cause appears to be one of the DUnit VMs running out of 
memory (see the OOME stack further down in this description).
{noformat}
org.junit.ComparisonFailure: [{"errorCode":505,"message":["Could not process 
command due to GemFire error. Error occurred while executing netstat on 
[server-1]"]}] expected:<[OK]> but was:<[ERROR]>
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at 
org.apache.geode.test.dunit.rules.GfshShellConnectionRule.executeAndVerifyCommand(GfshShellConnectionRule.java:163)
at 
org.apache.geode.management.internal.cli.NetstatDUnitTest.testConnectToJmxManagerOne(NetstatDUnitTest.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.apache.geode.test.junit.rules.DescribedExternalResource$1.evaluate(DescribedExternalResource.java:37)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.ru

[jira] [Updated] (GEODE-2488) NetstatDUnitTest fails with OutOfMemoryError

2017-02-15 Thread Kirk Lund (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk Lund updated GEODE-2488:
-
Labels: MiscellaneousCommands NetstatCommand netstat  (was: )

> NetstatDUnitTest fails with OutOfMemoryError
> 
>
> Key: GEODE-2488
> URL: https://issues.apache.org/jira/browse/GEODE-2488
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, management
>Reporter: Kirk Lund
>  Labels: MiscellaneousCommands, NetstatCommand, netstat
>
> The JUnit controller JVM for NetstatDUnitTest fails with something like the 
> following stack. The cause appears to be one of the DUnit VMs running out of 
> memory (see the OOME stack further down in this description).
> {noformat}
> org.junit.ComparisonFailure: [{"errorCode":505,"message":["Could not process 
> command due to GemFire error. Error occurred while executing netstat on 
> [server-1]"]}] expected:<[OK]> but was:<[ERROR]>
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> org.apache.geode.test.dunit.rules.GfshShellConnectionRule.executeAndVerifyCommand(GfshShellConnectionRule.java:163)
>   at 
> org.apache.geode.management.internal.cli.NetstatDUnitTest.testConnectToJmxManagerOne(NetstatDUnitTest.java:81)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.apache.geode.test.junit.rules.DescribedExternalResource$1.evaluate(DescribedExternalResource.java:37)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImp

[jira] [Updated] (GEODE-2488) NetstatDUnitTest fails with OutOfMemoryError

2017-02-15 Thread Kirk Lund (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk Lund updated GEODE-2488:
-
Component/s: management

> NetstatDUnitTest fails with OutOfMemoryError
> 
>
> Key: GEODE-2488
> URL: https://issues.apache.org/jira/browse/GEODE-2488
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, management
>Reporter: Kirk Lund
>  Labels: MiscellaneousCommands, NetstatCommand, netstat
>
> The JUnit controller JVM for NetstatDUnitTest fails with something like the 
> following stack. The cause appears to be one of the DUnit VMs running out of 
> memory (see the OOME stack further down in this description).
> {noformat}
> org.junit.ComparisonFailure: [{"errorCode":505,"message":["Could not process 
> command due to GemFire error. Error occurred while executing netstat on 
> [server-1]"]}] expected:<[OK]> but was:<[ERROR]>
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> org.apache.geode.test.dunit.rules.GfshShellConnectionRule.executeAndVerifyCommand(GfshShellConnectionRule.java:163)
>   at 
> org.apache.geode.management.internal.cli.NetstatDUnitTest.testConnectToJmxManagerOne(NetstatDUnitTest.java:81)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.apache.geode.test.junit.rules.DescribedExternalResource$1.evaluate(DescribedExternalResource.java:37)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.

[jira] [Commented] (GEODE-2485) CacheTransactionManager suspend/resume can leak memory for 30 minutes

2017-02-15 Thread Eric Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868312#comment-15868312
 ] 

Eric Shu commented on GEODE-2485:
-

Suspend and resume can also be called by the product itself.

For each create on a NORMAL or PRELOADED region in a transaction, the product 
suspends the transaction, tries to fetch the remote version tag for the entry, 
and then resumes the transaction.

{noformat}
  /**
   * Fetch Version for the given key from a remote replicate member.
   *
   * @param key
   * @throws EntryNotFoundException if the entry is not found on a replicate member
   * @return VersionTag for the key
   */
  protected VersionTag fetchRemoteVersionTag(Object key) {
    VersionTag tag = null;
    assert this.dataPolicy != DataPolicy.REPLICATE;
    TransactionId txId = cache.getCacheTransactionManager().suspend();
    try {
      boolean retry = true;
      InternalDistributedMember member = getRandomReplicate();
      while (retry) {
        try {
          if (member == null) {
            break;
          }
          FetchVersionResponse response = RemoteFetchVersionMessage.send(member, this, key);
          tag = response.waitForResponse();
          retry = false;
        } catch (RemoteOperationException e) {
          member = getRandomReplicate();
          if (member != null) {
            if (logger.isDebugEnabled()) {
              logger.debug("Retrying RemoteFetchVersionMessage on member:{}", member);
            }
          }
        }
      }
    } finally {
      if (txId != null) {
        cache.getCacheTransactionManager().resume(txId);
      }
    }
    return tag;
  }
{noformat}

> CacheTransactionManager suspend/resume can leak memory for 30 minutes
> -
>
> Key: GEODE-2485
> URL: https://issues.apache.org/jira/browse/GEODE-2485
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Darrel Schneider
>
> Each time you suspend/resume a transaction it leaves about 80 bytes of heap 
> allocated for 30 minutes. If you are doing a high rate of suspend/resume 
> calls then this could cause you to run out of memory in that 30 minute window.
> As a workaround you can set -Dgemfire.suspendedTxTimeout to a value as small 
> as 1 (which would cause the memory to be freed up after 1 minute instead of 
> 30 minutes).
> One fix for this is to periodically call cache.getCCPTimer().timerPurge() 
> after a certain number of resume calls have been done (for example 1000). 
> Currently resume is calling cancel on the TimerTask but that leaves the task 
> in the SystemTimer queue until it expires. Calling timerPurge in addition to 
> cancel will fix this bug. Calling timerPurge for every cancel may cause the 
> resume method to take too long and keep in mind the getCCPTimer is used by 
> other things so the size of the SystemTimer queue that is being purged will 
> not only be the number of suspended txs.
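The periodic-purge fix suggested above can be sketched roughly as follows. This is a minimal illustration, not Geode's actual TXManagerImpl code: the class and method names (SuspendedTxTimer, PURGE_INTERVAL) are invented for the example, and the real implementation schedules expiry tasks on the shared CCP timer.

```java
import java.util.Timer;
import java.util.TimerTask;

// Hypothetical sketch: cancel the expiry task on resume as today, but
// additionally purge the shared timer's queue once every PURGE_INTERVAL
// resumes, so cancelled tasks don't sit on the heap until their
// (default 30-minute) expiry would have fired.
public class SuspendedTxTimer {
  private static final int PURGE_INTERVAL = 1000;
  private final Timer timer = new Timer("suspended-tx-expiry", true);
  private long resumeCount = 0;

  // Schedule a task that rolls back the transaction if it is never resumed.
  public TimerTask suspend(long timeoutMillis, Runnable onExpiry) {
    TimerTask task = new TimerTask() {
      @Override
      public void run() {
        onExpiry.run(); // clean up the abandoned transaction
      }
    };
    timer.schedule(task, timeoutMillis);
    return task;
  }

  public synchronized void resume(TimerTask expiryTask) {
    expiryTask.cancel(); // the task stays queued until its scheduled time...
    if (++resumeCount % PURGE_INTERVAL == 0) {
      timer.purge();     // ...unless cancelled tasks are periodically purged
    }
  }
}
```

Purging only every N resumes amortizes the O(queue size) cost of `Timer.purge()`, which matters here because the shared timer also holds unrelated tasks.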



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2488) NetstatDUnitTest fails with OutOfMemoryError

2017-02-15 Thread Kirk Lund (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868317#comment-15868317
 ] 

Kirk Lund commented on GEODE-2488:
--

NetstatFunction reads in every line from the netstat output and lsof output as 
a String and then appends it to a StringBuilder. When it completes, it invokes 
toString which throws "OutOfMemoryError: Java heap space." 

After that it copies the String into a byte array which is then passed to 
CliUtil.compressBytes. This creates one more giant byte array containing the 
entire netstat output in compressed form. It's this last byte array which then 
gets sent to the Locator on a socket.

If the initial toString() doesn't run out of memory, getBytes() or 
compressBytes() could both push the JVM out of memory as well. I think the fix 
might require streaming the results back one readLine() at a time instead of 
trying to build up a giant String and two giant byte[] arrays of the entire 
output in memory.
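The streaming idea can be sketched as below: compress each line as it is read, so only one line plus the growing compressed output is ever held in memory, instead of a giant String and two full-size byte arrays. This is an illustrative sketch, not NetstatFunction's actual API, and it uses GZIP where CliUtil.compressBytes may use a different codec.

```java
import java.io.BufferedReader;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch of streaming compression for netstat/lsof output.
public class StreamingCompress {
  public static byte[] compressLines(BufferedReader reader) throws IOException {
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    try (GZIPOutputStream gzip = new GZIPOutputStream(sink)) {
      String line;
      while ((line = reader.readLine()) != null) {
        gzip.write(line.getBytes("UTF-8")); // compress one line at a time
        gzip.write('\n');
      }
    } // close() flushes the final GZIP trailer
    return sink.toByteArray();
  }
}
```

A further step would be to write the compressed bytes directly to the reply socket instead of buffering them in a ByteArrayOutputStream at all.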


> NetstatDUnitTest fails with OutOfMemoryError
> 
>
> Key: GEODE-2488
> URL: https://issues.apache.org/jira/browse/GEODE-2488
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, management
>Reporter: Kirk Lund
>  Labels: MiscellaneousCommands, NetstatCommand, netstat
>
> The JUnit controller JVM for NetstatDUnitTest fails with something like the 
> following stack. The cause appears to be one of the DUnit VMs running out of 
> memory (see the OOME stack further down in this description).
> {noformat}
> org.junit.ComparisonFailure: [{"errorCode":505,"message":["Could not process 
> command due to GemFire error. Error occurred while executing netstat on 
> [server-1]"]}] expected:<[OK]> but was:<[ERROR]>
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> org.apache.geode.test.dunit.rules.GfshShellConnectionRule.executeAndVerifyCommand(GfshShellConnectionRule.java:163)
>   at 
> org.apache.geode.management.internal.cli.NetstatDUnitTest.testConnectToJmxManagerOne(NetstatDUnitTest.java:81)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.apache.geode.test.junit.rules.DescribedExternalResource$1.evaluate(DescribedExternalResource.java:37)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.jav

[jira] [Comment Edited] (GEODE-2488) NetstatDUnitTest fails with OutOfMemoryError

2017-02-15 Thread Kirk Lund (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868317#comment-15868317
 ] 

Kirk Lund edited comment on GEODE-2488 at 2/15/17 6:37 PM:
---

NetstatFunction reads in every line from the netstat output and lsof output as 
a String and then appends it to a StringBuilder. When it completes, it invokes 
toString which throws "OutOfMemoryError: Java heap space." 

After that it would copy the String into a byte array which is then passed to 
CliUtil.compressBytes. This creates one more giant byte array containing the 
entire netstat output in compressed form. It's this last byte array which then 
gets sent to the Locator on a socket.

If the initial toString() doesn't run out of memory, getBytes() or 
compressBytes() could both push the JVM out of memory as well. I think the fix 
might require streaming the results back one readLine() at a time instead of 
trying to build up a giant String and two giant byte[] arrays of the entire 
output in memory.



was (Author: klund):
NetstatFunction reads in every line from the netstat output and lsof output as 
a String and then appends it to a StringBuilder. When it completes, it invokes 
toString which throws "OutOfMemoryError: Java heap space." 

After that it copies the String into a byte array which is then passed to 
CliUtil.compressBytes. This creates one more giant byte array containing the 
entire netstat output in compressed form. It's this last byte array which then 
gets sent to the Locator on a socket.

If the initial toString() doesn't run out of memory, getBytes() or 
compressBytes() could both push the JVM out of memory as well. I think the fix 
might require streaming the results back one readLine() at a time instead of 
trying to build up a giant String and two giant byte[] arrays of the entire 
output in memory.


> NetstatDUnitTest fails with OutOfMemoryError
> 
>
> Key: GEODE-2488
> URL: https://issues.apache.org/jira/browse/GEODE-2488
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, management
>Reporter: Kirk Lund
>  Labels: MiscellaneousCommands, NetstatCommand, netstat
>
> The JUnit controller JVM for NetstatDUnitTest fails with something like the 
> following stack. The cause appears to be one of the DUnit VMs running out of 
> memory (see the OOME stack further down in this description).
> {noformat}
> org.junit.ComparisonFailure: [{"errorCode":505,"message":["Could not process 
> command due to GemFire error. Error occurred while executing netstat on 
> [server-1]"]}] expected:<[OK]> but was:<[ERROR]>
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> org.apache.geode.test.dunit.rules.GfshShellConnectionRule.executeAndVerifyCommand(GfshShellConnectionRule.java:163)
>   at 
> org.apache.geode.management.internal.cli.NetstatDUnitTest.testConnectToJmxManagerOne(NetstatDUnitTest.java:81)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.apache.geode.test.junit.rules.DescribedExternalResource$1.evaluate(DescribedExternalResource.ja

[jira] [Updated] (GEODE-2421) Create VS2017 AMI

2017-02-15 Thread Ernest Burghardt (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ernest Burghardt updated GEODE-2421:

Description: VS2017 is due to RTM in early March 2017, it would be nice to 
have a dev AMI for the new tool chain.
Summary: Create VS2017 AMI  (was: Create VS2015 AMI)

> Create VS2017 AMI
> -
>
> Key: GEODE-2421
> URL: https://issues.apache.org/jira/browse/GEODE-2421
> Project: Geode
>  Issue Type: Task
>  Components: native client
>Reporter: Ernest Burghardt
>
> VS2017 is due to RTM in early March 2017, it would be nice to have a dev AMI 
> for the new tool chain.





[GitHub] geode-native pull request #11: GEODE-2486: Initialize OpenSSL for DEFAULT ci...

2017-02-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/geode-native/pull/11


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (GEODE-2421) Create VS2017 AMI

2017-02-15 Thread Michael Martell (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868277#comment-15868277
 ] 

Michael Martell edited comment on GEODE-2421 at 2/15/17 6:45 PM:
-

Suggest changing this ticket to use VS 2017 instead of VS 2015. This version is 
slated to be released on Mar 7, 2017.

I have been using the VS 2017 release candidates for about three months, and 
haven't seen any problems building and running our tests.


was (Author: mmartell):
Since VS 2017 is needed by DotNetty, Ernie and I suggest changing this ticket 
to use VS 2017 instead of VS 2015. This version is slated to be released on Mar 
7, 2017.

I have been using the VS 2017 release candidates for about three months, and 
haven't seen any problems building and running our tests.

> Create VS2017 AMI
> -
>
> Key: GEODE-2421
> URL: https://issues.apache.org/jira/browse/GEODE-2421
> Project: Geode
>  Issue Type: Task
>  Components: native client
>Reporter: Ernest Burghardt
>
> VS2017 is due to RTM in early March 2017, it would be nice to have a dev AMI 
> for the new tool chain.





[jira] [Commented] (GEODE-2486) SSL ciphers other than NULL not supported

2017-02-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868328#comment-15868328
 ] 

ASF GitHub Bot commented on GEODE-2486:
---

Github user asfgit closed the pull request at:

https://github.com/apache/geode-native/pull/11


> SSL ciphers other than NULL not supported
> -
>
> Key: GEODE-2486
> URL: https://issues.apache.org/jira/browse/GEODE-2486
> Project: Geode
>  Issue Type: Bug
>  Components: native client
>Reporter: Jacob S. Barrett
>
> SSLImpl does not correctly initialize the OpenSSL library so ciphers other 
> than the NULL cipher can be used.





[jira] [Commented] (GEODE-2486) SSL ciphers other than NULL not supported

2017-02-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868327#comment-15868327
 ] 

ASF subversion and git services commented on GEODE-2486:


Commit ad8b5a83bac8d972d7ad46bbee201b056c1d436d in geode-native's branch 
refs/heads/develop from Jacob Barrett
[ https://git-wip-us.apache.org/repos/asf?p=geode-native.git;h=ad8b5a8 ]

GEODE-2486: Initialize OpenSSL for DEFAULT cipher support.

- Init SSLv23_client mode to support negotiation of all SSL/TLS
  versions.
- Cleanup C++ style issues.
- Update tests to use NON-NULL cipher.

> SSL ciphers other than NULL not supported
> -
>
> Key: GEODE-2486
> URL: https://issues.apache.org/jira/browse/GEODE-2486
> Project: Geode
>  Issue Type: Bug
>  Components: native client
>Reporter: Jacob S. Barrett
>
> SSLImpl does not correctly initialize the OpenSSL library so ciphers other 
> than the NULL cipher can be used.





[GitHub] geode pull request #398: Split the redis adapter into its own package

2017-02-15 Thread scmbuildguy
Github user scmbuildguy commented on a diff in the pull request:

https://github.com/apache/geode/pull/398#discussion_r101350393
  
--- Diff: 
geode-core/src/main/java/org/apache/geode/redis/GeodeRedisService.java ---
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for 
additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF 
ANY KIND, either express
+ * or implied. See the License for the specific language governing 
permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.redis;
+
+import org.apache.geode.internal.cache.CacheService;
+
+/**
+ * Created by ukohlmeyer on 2/9/17.
--- End diff --

Remove 'author' comment




[GitHub] geode pull request #398: Split the redis adapter into its own package

2017-02-15 Thread scmbuildguy
Github user scmbuildguy commented on a diff in the pull request:

https://github.com/apache/geode/pull/398#discussion_r101351743
  
--- Diff: 
geode-redis/src/test/java/org/apache/geode/redis/AuthJUnitTest.java ---
@@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for 
additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF 
ANY KIND, either express
+ * or implied. See the License for the specific language governing 
permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.redis;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.fail;
+
+import org.apache.geode.distributed.ConfigurationProperties;
+import org.apache.geode.distributed.internal.InternalDistributedSystem;
+import org.apache.geode.internal.cache.GemFireCacheImpl;
+import org.apache.geode.test.junit.categories.IntegrationTest;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import java.util.Properties;
+import redis.clients.jedis.Jedis;
+import redis.clients.jedis.exceptions.JedisConnectionException;
+import redis.clients.jedis.exceptions.JedisDataException;
+
+@Category(IntegrationTest.class)
+public class AuthJUnitTest extends RedisTestBase {
+
+  private static final String PASSWORD = "pwd";
+
+  private void setupCacheWithPassword() {
+if (cache != null) {
+  cache.close();
+}
+Properties redisCacheProperties = getDefaultRedisCacheProperties();
+
redisCacheProperties.setProperty(ConfigurationProperties.REDIS_PASSWORD, 
PASSWORD);
+cache = (GemFireCacheImpl) createCacheInstance(redisCacheProperties);
+  }
+
+  @Test
+  public void testAuthConfig() {
+setupCacheWithPassword();
+InternalDistributedSystem distributedSystem = 
cache.getDistributedSystem();
+assert 
(distributedSystem.getConfig().getRedisPassword().equals(PASSWORD));
+cache.close();
+  }
+
+  @Test
+  public void testAuthRejectAccept() {
+setupCacheWithPassword();
+try (Jedis jedis = defaultJedisInstance()) {
+  Exception ex = null;
+  try {
+jedis.auth("wrongpwd");
+  } catch (JedisDataException e) {
+ex = e;
+  }
+  assertNotNull(ex);
+
+  String res = jedis.auth(PASSWORD);
+  assertEquals(res, "OK");
+}
+  }
+
+  @Test
+  public void testAuthNoPwd() {
+try (Jedis jedis = defaultJedisInstance()) {
+  jedis.auth(PASSWORD);
+  fail(
+  "We expecting either a JedisConnectionException or 
JedisDataException to be thrown here");
--- End diff --

Minor grammar issue, seems to be missing 'are'. 

We are expecting...




[GitHub] geode-native pull request #3: Replace ace calls to standard functions

2017-02-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/geode-native/pull/3




[jira] [Commented] (GEODE-2484) Remove ACE from native client dependencies

2017-02-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868337#comment-15868337
 ] 

ASF subversion and git services commented on GEODE-2484:


Commit d4e0a8447b1e54d5c34a35069bc24a30d8258e2e in geode-native's branch 
refs/heads/develop from [~dkimura]
[ https://git-wip-us.apache.org/repos/asf?p=geode-native.git;h=d4e0a84 ]

GEODE-2484 Replace ace calls to standard functions

- Fix compile errors related to including headers
- Fix formatting and c-style cast warnings

This closes #3.


> Remove ACE from native client dependencies
> --
>
> Key: GEODE-2484
> URL: https://issues.apache.org/jira/browse/GEODE-2484
> Project: Geode
>  Issue Type: Task
>  Components: native client
>Reporter: David Kimura
>
> Remove ACE from native client dependencies.
> Replace ACE usage with C++11 and/or Boost 1.63+





[jira] [Commented] (GEODE-2485) CacheTransactionManager suspend/resume can leak memory for 30 minutes

2017-02-15 Thread Eric Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868344#comment-15868344
 ] 

Eric Shu commented on GEODE-2485:
-

Stack trace for the above scenario:
{noformat}
at org.apache.geode.internal.cache.TXManagerImpl.suspend(TXManagerImpl.java:1225)
at org.apache.geode.internal.cache.DistributedRegion.fetchRemoteVersionTag(DistributedRegion.java:4004)
at org.apache.geode.internal.cache.TXEntryState.fetchRemoteVersionTag(TXEntryState.java:1037)
at org.apache.geode.internal.cache.TXEntryState.basicPut(TXEntryState.java:1019)
at org.apache.geode.internal.cache.TXState.txPutEntry(TXState.java:1288)
at org.apache.geode.internal.cache.TXState.putEntry(TXState.java:1615)
at org.apache.geode.internal.cache.TXStateProxyImpl.putEntry(TXStateProxyImpl.java:810)
at org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5194)
at org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1605)
at org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1592)
at org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:277)
{noformat}

> CacheTransactionManager suspend/resume can leak memory for 30 minutes
> -
>
> Key: GEODE-2485
> URL: https://issues.apache.org/jira/browse/GEODE-2485
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Darrel Schneider
>
> Each time you suspend/resume a transaction it leaves about 80 bytes of heap 
> allocated for 30 minutes. If you are doing a high rate of suspend/resume 
> calls then this could cause you to run out of memory in that 30 minute window.
> As a workaround you can set -Dgemfire.suspendedTxTimeout to a value as small 
> as 1 (which would cause the memory to be freed up after 1 minute instead of 
> 30 minutes).
> One fix for this is to periodically call cache.getCCPTimer().timerPurge() 
> after a certain number of resume calls have been done (for example 1000). 
> Currently resume is calling cancel on the TimerTask but that leaves the task 
> in the SystemTimer queue until it expires. Calling timerPurge in addition to 
> cancel will fix this bug. Calling timerPurge for every cancel may cause the 
> resume method to take too long and keep in mind the getCCPTimer is used by 
> other things so the size of the SystemTimer queue that is being purged will 
> not only be the number of suspended txs.





[GitHub] geode pull request #398: Split the redis adapter into its own package

2017-02-15 Thread galen-pivotal
Github user galen-pivotal commented on a diff in the pull request:

https://github.com/apache/geode/pull/398#discussion_r101354013
  
--- Diff: 
geode-core/src/main/java/org/apache/geode/redis/GeodeRedisService.java ---
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for 
additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF 
ANY KIND, either express
+ * or implied. See the License for the specific language governing 
permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.redis;
+
+import org.apache.geode.internal.cache.CacheService;
+
+/**
+ * Created by ukohlmeyer on 2/9/17.
--- End diff --

fixed.




[jira] [Commented] (GEODE-2475) Upgrade lucene version to 6.4.1

2017-02-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868347#comment-15868347
 ] 

ASF subversion and git services commented on GEODE-2475:


Commit 5d98a8c9873271e10b604cd9066f6d50bd881172 in geode's branch 
refs/heads/feature/GEODE-2449 from [~huynhja]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=5d98a8c ]

GEODE-2475: Upgrade Lucene version to 6.4.1


> Upgrade lucene version to 6.4.1 
> 
>
> Key: GEODE-2475
> URL: https://issues.apache.org/jira/browse/GEODE-2475
> Project: Geode
>  Issue Type: Task
>  Components: lucene
>Reporter: Jason Huynh
>Assignee: Jason Huynh
> Fix For: 1.2.0
>
>
> We should probably keep geode up to date with the latest Lucene jars





[jira] [Commented] (GEODE-2398) Sporadic Oplog corruption due to channel.write failure

2017-02-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868348#comment-15868348
 ] 

ASF subversion and git services commented on GEODE-2398:


Commit 9b0f16570aad4abc82b71d0d16167a9774449d41 in geode's branch 
refs/heads/feature/GEODE-2449 from [~khowe]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=9b0f165 ]

GEODE-2398: Retry oplog channel.write on silent failures

Implemented limited retries in two forms of Oplog.flush() when channel.write() 
is called.
If write() returns a byte count less than the change in the ByteBuffer
positions, then reset the buffer positions and retry the write a limited
number of times. Throws IOException if the write doesn't succeed after a
few retries (the maximum number of retries is defined by a static).

Added new unit tests.
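A sketch of the retry pattern that commit message describes, under stated assumptions: this is not Geode's actual Oplog code, and the names (flushWithRetry, FlakyChannel, MAX_RETRIES) are hypothetical. FlakyChannel reproduces the reported failure mode, where write() consumes the buffer but reports 0 bytes written.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

public class RetryFlush {
    static final int MAX_RETRIES = 5; // stand-in for Oplog's static retry cap

    /** Write bb fully, retrying when the channel reports fewer bytes than it consumed. */
    static void flushWithRetry(WritableByteChannel channel, ByteBuffer bb) throws IOException {
        int retries = 0;
        while (bb.hasRemaining()) {
            int posBefore = bb.position();
            int written = channel.write(bb);
            if (written == 0 || written != bb.position() - posBefore) {
                if (++retries > MAX_RETRIES) {
                    throw new IOException("write failed after " + MAX_RETRIES + " retries");
                }
                // Rewind to just past what the channel claims it wrote and try again;
                // this is the "reset buffer positions" step from the commit message.
                bb.position(posBefore + written);
            }
        }
    }

    /** Test double: consumes the buffer but reports 0 bytes written a few times. */
    static class FlakyChannel implements WritableByteChannel {
        final ByteArrayOutputStream sink = new ByteArrayOutputStream();
        int failuresLeft;
        FlakyChannel(int failures) { failuresLeft = failures; }

        @Override public int write(ByteBuffer src) {
            if (failuresLeft-- > 0) {
                src.position(src.limit()); // the silent-failure mode from the bug report
                return 0;
            }
            int n = src.remaining();
            byte[] tmp = new byte[n];
            src.get(tmp);
            sink.write(tmp, 0, n);
            return n;
        }
        @Override public boolean isOpen() { return true; }
        @Override public void close() {}
    }
}
```

Resetting the position before retrying is the key step: without it, the bytes the channel silently dropped would be skipped, corrupting the .crf file exactly as the ticket describes.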


> Sporadic Oplog corruption due to channel.write failure
> --
>
> Key: GEODE-2398
> URL: https://issues.apache.org/jira/browse/GEODE-2398
> Project: Geode
>  Issue Type: Bug
>  Components: persistence
>Reporter: Kenneth Howe
>Assignee: Kenneth Howe
> Fix For: 1.2.0
>
>
> There have been some occurrences of Oplog corruption during testing that have 
> been traced to failures in writing oplog entries to the .crf file. When it 
> fails, Oplog.flush attempts to write a ByteBuffer to the file channel. The 
> call to channel.write(bb) method returns 0 bytes written, but the source 
> ByteBuffer position is moved to the ByteBuffer limit.





[jira] [Commented] (GEODE-2449) Move redis adapter to extension framework

2017-02-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868351#comment-15868351
 ] 

ASF subversion and git services commented on GEODE-2449:


Commit f79beb1e9d5ec08d25bb02f93eb96468b3253a72 in geode's branch 
refs/heads/feature/GEODE-2449 from [~gosullivan]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=f79beb1 ]

GEODE-2449: changes in response to review.

* Move HyperLogLog back into geode-core.
* Bring back deprecated GeodeRedisServer for backwards compatibility.


> Move redis adapter to extension framework
> -
>
> Key: GEODE-2449
> URL: https://issues.apache.org/jira/browse/GEODE-2449
> Project: Geode
>  Issue Type: Sub-task
>  Components: redis
>Reporter: Addison
>Assignee: Udo Kohlmeyer
> Fix For: 1.2.0
>
>






Re: Build failed in Jenkins: Geode-nightly #749

2017-02-15 Thread Mark Bretl
Hi Galen,

I am able to see the console log [1], test report summary [2], and artifacts
[3] without logging in. Which 'build report' are you trying to view?

--Mark

[1] https://builds.apache.org/job/Geode-nightly/749/console
[2] https://builds.apache.org/job/Geode-nightly/749/testReport/
[3] https://builds.apache.org/job/Geode-nightly/749/artifact/

On Wed, Feb 15, 2017 at 9:04 AM, Galen M O'Sullivan 
wrote:

> I don't seem to have access to see the build report. Is that restricted to
> committers only?
>
> Thanks,
> Galen
>
> On Wed, Feb 15, 2017 at 8:08 AM, Apache Jenkins Server <
> jenk...@builds.apache.org> wrote:
>
> > See 
> >
> > Changes:
> >
> > [kmiller] GEODE-2479 Remove docs reference to gemstone.com package
> >
> > [jiliao] GEODE-2474: mark NetstatDUnitTest as flaky
> >
> > [jiliao] refactor ServerStarterRule and LocatorStarterRule so that they
> > can be
> >
> > [gzhou] GEODE-2471: fix the race condition in test code.
> >
> > --
> > [...truncated 713 lines...]
> > :geode-cq:build
> > :geode-cq:distributedTest
> > :geode-cq:flakyTest
> > :geode-cq:integrationTest
> > :geode-json:assemble
> > :geode-json:compileTestJava UP-TO-DATE
> > :geode-json:processTestResources UP-TO-DATE
> > :geode-json:testClasses UP-TO-DATE
> > :geode-json:checkMissedTests UP-TO-DATE
> > :geode-json:spotlessJavaCheck
> > :geode-json:spotlessCheck
> > :geode-json:test UP-TO-DATE
> > :geode-json:check
> > :geode-json:build
> > :geode-json:distributedTest UP-TO-DATE
> > :geode-json:flakyTest UP-TO-DATE
> > :geode-json:integrationTest UP-TO-DATE
> > :geode-junit:javadoc
> > :geode-junit:javadocJar
> > :geode-junit:sourcesJar
> > :geode-junit:signArchives SKIPPED
> > :geode-junit:assemble
> > :geode-junit:compileTestJava
> > :geode-junit:processTestResources UP-TO-DATE
> > :geode-junit:testClasses
> > :geode-junit:checkMissedTests
> > :geode-junit:spotlessJavaCheck
> > :geode-junit:spotlessCheck
> > :geode-junit:test
> > :geode-junit:check
> > :geode-junit:build
> > :geode-junit:distributedTest
> > :geode-junit:flakyTest
> > :geode-junit:integrationTest
> > :geode-lucene:assemble
> > :geode-lucene:compileTestJava
> > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > lucene-test-framework/6.4.1/lucene-test-framework-6.4.1.pom
> > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > lucene-codecs/6.4.1/lucene-codecs-6.4.1.pom
> > Download https://repo1.maven.org/maven2/com/carrotsearch/
> > randomizedtesting/randomizedtesting-runner/2.4.
> > 0/randomizedtesting-runner-2.4.0.pom
> > Download https://repo1.maven.org/maven2/com/carrotsearch/
> > randomizedtesting/randomizedtesting-parent/2.4.
> > 0/randomizedtesting-parent-2.4.0.pom
> > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > lucene-test-framework/6.4.1/lucene-test-framework-6.4.1.jar
> > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > lucene-codecs/6.4.1/lucene-codecs-6.4.1.jar
> > Download https://repo1.maven.org/maven2/com/carrotsearch/
> > randomizedtesting/randomizedtesting-runner/2.4.
> > 0/randomizedtesting-runner-2.4.0.jar
> > Note: Some input files use or override a deprecated API.
> > Note: Recompile with -Xlint:deprecation for details.
> > Note: Some input files use unchecked or unsafe operations.
> > Note: Recompile with -Xlint:unchecked for details.
> > :geode-lucene:processTestResources
> > :geode-lucene:testClasses
> > :geode-lucene:checkMissedTests
> > :geode-lucene:spotlessJavaCheck
> > :geode-lucene:spotlessCheck
> > :geode-lucene:test
> > :geode-lucene:check
> > :geode-lucene:build
> > :geode-lucene:distributedTest
> > :geode-lucene:flakyTest
> > :geode-lucene:integrationTest
> > :geode-old-client-support:assemble
> > :geode-old-client-support:compileTestJava
> > :geode-old-client-support:processTestResources UP-TO-DATE
> > :geode-old-client-support:testClasses
> > :geode-old-client-support:checkMissedTests
> > :geode-old-client-support:spotlessJavaCheck
> > :geode-old-client-support:spotlessCheck
> > :geode-old-client-support:test
> > :geode-old-client-support:check
> > :geode-old-client-support:build
> > :geode-old-client-support:distributedTest
> > :geode-old-client-support:flakyTest
> > :geode-old-client-support:integrationTest
> > :geode-old-versions:javadoc UP-TO-DATE
> > :geode-old-versions:javadocJar
> > :geode-old-versions:sourcesJar
> > :geode-old-versions:signArchives SKIPPED
> > :geode-old-versions:assemble
> > :geode-old-versions:compileTestJava UP-TO-DATE
> > :geode-old-versions:processTestResources UP-TO-DATE
> > :geode-old-versions:testClasses UP-TO-DATE
> > :geode-old-versions:checkMissedTests UP-TO-DATE
> > :geode-old-versions:spotlessJavaCheck
> > :geode-old-versions:spotlessCheck
> > :geode-old-versions:test UP-TO-DATE
> > :geode-old-versions:check
> > :geode-old-versions:build
> > :geode-old-versions:distributedTest UP-TO-DATE
> > :geode-old-versions:flakyTest UP-TO-DATE
> > :geode-old-ve

[GitHub] geode pull request #398: Split the redis adapter into its own package

2017-02-15 Thread galen-pivotal
Github user galen-pivotal commented on a diff in the pull request:

https://github.com/apache/geode/pull/398#discussion_r101353985
  
--- Diff: 
geode-redis/src/test/java/org/apache/geode/redis/AuthJUnitTest.java ---
@@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for 
additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF 
ANY KIND, either express
+ * or implied. See the License for the specific language governing 
permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.redis;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.fail;
+
+import org.apache.geode.distributed.ConfigurationProperties;
+import org.apache.geode.distributed.internal.InternalDistributedSystem;
+import org.apache.geode.internal.cache.GemFireCacheImpl;
+import org.apache.geode.test.junit.categories.IntegrationTest;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import java.util.Properties;
+import redis.clients.jedis.Jedis;
+import redis.clients.jedis.exceptions.JedisConnectionException;
+import redis.clients.jedis.exceptions.JedisDataException;
+
+@Category(IntegrationTest.class)
+public class AuthJUnitTest extends RedisTestBase {
+
+  private static final String PASSWORD = "pwd";
+
+  private void setupCacheWithPassword() {
+if (cache != null) {
+  cache.close();
+}
+Properties redisCacheProperties = getDefaultRedisCacheProperties();
+
redisCacheProperties.setProperty(ConfigurationProperties.REDIS_PASSWORD, 
PASSWORD);
+cache = (GemFireCacheImpl) createCacheInstance(redisCacheProperties);
+  }
+
+  @Test
+  public void testAuthConfig() {
+setupCacheWithPassword();
+InternalDistributedSystem distributedSystem = 
cache.getDistributedSystem();
+assert 
(distributedSystem.getConfig().getRedisPassword().equals(PASSWORD));
+cache.close();
+  }
+
+  @Test
+  public void testAuthRejectAccept() {
+setupCacheWithPassword();
+try (Jedis jedis = defaultJedisInstance()) {
+  Exception ex = null;
+  try {
+jedis.auth("wrongpwd");
+  } catch (JedisDataException e) {
+ex = e;
+  }
+  assertNotNull(ex);
+
+  String res = jedis.auth(PASSWORD);
+  assertEquals(res, "OK");
+}
+  }
+
+  @Test
+  public void testAuthNoPwd() {
+try (Jedis jedis = defaultJedisInstance()) {
+  jedis.auth(PASSWORD);
+  fail(
+  "We expecting either a JedisConnectionException or 
JedisDataException to be thrown here");
--- End diff --

fixed.




[GitHub] geode pull request #398: Split the redis adapter into its own package

2017-02-15 Thread galen-pivotal
Github user galen-pivotal commented on a diff in the pull request:

https://github.com/apache/geode/pull/398#discussion_r101354076
  
--- Diff: 
geode-redis/src/test/java/org/apache/geode/redis/AuthJUnitTest.java ---
@@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for 
additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF 
ANY KIND, either express
+ * or implied. See the License for the specific language governing 
permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.redis;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.fail;
+
+import org.apache.geode.distributed.ConfigurationProperties;
+import org.apache.geode.distributed.internal.InternalDistributedSystem;
+import org.apache.geode.internal.cache.GemFireCacheImpl;
+import org.apache.geode.test.junit.categories.IntegrationTest;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import java.util.Properties;
+import redis.clients.jedis.Jedis;
+import redis.clients.jedis.exceptions.JedisConnectionException;
+import redis.clients.jedis.exceptions.JedisDataException;
+
+@Category(IntegrationTest.class)
+public class AuthJUnitTest extends RedisTestBase {
+
+  private static final String PASSWORD = "pwd";
+
+  private void setupCacheWithPassword() {
+if (cache != null) {
+  cache.close();
+}
+Properties redisCacheProperties = getDefaultRedisCacheProperties();
+
redisCacheProperties.setProperty(ConfigurationProperties.REDIS_PASSWORD, 
PASSWORD);
+cache = (GemFireCacheImpl) createCacheInstance(redisCacheProperties);
+  }
+
+  @Test
+  public void testAuthConfig() {
+setupCacheWithPassword();
+InternalDistributedSystem distributedSystem = 
cache.getDistributedSystem();
+assert 
(distributedSystem.getConfig().getRedisPassword().equals(PASSWORD));
+cache.close();
+  }
+
+  @Test
+  public void testAuthRejectAccept() {
+setupCacheWithPassword();
+try (Jedis jedis = defaultJedisInstance()) {
+  Exception ex = null;
+  try {
+jedis.auth("wrongpwd");
+  } catch (JedisDataException e) {
+ex = e;
+  }
+  assertNotNull(ex);
+
+  String res = jedis.auth(PASSWORD);
+  assertEquals(res, "OK");
+}
+  }
+
+  @Test
+  public void testAuthNoPwd() {
+try (Jedis jedis = defaultJedisInstance()) {
+  jedis.auth(PASSWORD);
+  fail(
+  "We expecting either a JedisConnectionException or 
JedisDataException to be thrown here");
--- End diff --

I made the comment more informative to boot.





[jira] [Commented] (GEODE-2398) Sporadic Oplog corruption due to channel.write failure

2017-02-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868350#comment-15868350
 ] 

ASF subversion and git services commented on GEODE-2398:


Commit fb14e9aab263654ed0176dcc3c9738be1b208a82 in geode's branch 
refs/heads/feature/GEODE-2449 from [~khowe]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=fb14e9a ]

GEODE-2398: Updates from review

https://reviews.apache.org/r/56506/


> Sporadic Oplog corruption due to channel.write failure
> --
>
> Key: GEODE-2398
> URL: https://issues.apache.org/jira/browse/GEODE-2398
> Project: Geode
>  Issue Type: Bug
>  Components: persistence
>Reporter: Kenneth Howe
>Assignee: Kenneth Howe
> Fix For: 1.2.0
>
>
> There have been some occurrences of Oplog corruption during testing that have 
> been traced to failures in writing oplog entries to the .crf file. When it 
> fails, Oplog.flush attempts to write a ByteBuffer to the file channel. The 
> call to channel.write(bb) method returns 0 bytes written, but the source 
> ByteBuffer position is moved to the ByteBuffer limit.





[jira] [Commented] (GEODE-2449) Move redis adapter to extension framework

2017-02-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868352#comment-15868352
 ] 

ASF subversion and git services commented on GEODE-2449:


Commit a1116189eb25c2209e21685386d9acfcf6fbdb9e in geode's branch 
refs/heads/feature/GEODE-2449 from [~gosullivan]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=a111618 ]

GEODE-2449. Don't throw exception on redis port of zero

when Redis service is not found.

We would like to have the ability to tell the difference between
settings that are unset and those that are set to zero, but for the
moment we can't. So that's how it is. It may be that we'll have to not
allow starting a Redis host by setting a port number of zero in config,
or that it will just be less reliable.
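The commit message notes that a primitive port field cannot distinguish "never set" from "explicitly zero". A generic sketch of one way to represent that distinction (illustrative only; PortConfig is a hypothetical class, not Geode's actual configuration API):

```java
import java.util.OptionalInt;

public class PortConfig {
    // A primitive int cannot tell "unset" apart from "explicitly 0";
    // OptionalInt (or a boxed Integer left null) can.
    private OptionalInt redisPort = OptionalInt.empty();

    void setRedisPort(int port) { redisPort = OptionalInt.of(port); }

    boolean isRedisEnabled() {
        // Enabled only when the user actually set a port, including port 0
        // (which conventionally means "pick an ephemeral port").
        return redisPort.isPresent();
    }

    int redisPortOrDefault(int dflt) { return redisPort.orElse(dflt); }
}
```

With this shape, "port 0 means start on an ephemeral port" and "no port means don't start the Redis service" stop being ambiguous.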


> Move redis adapter to extension framework
> -
>
> Key: GEODE-2449
> URL: https://issues.apache.org/jira/browse/GEODE-2449
> Project: Geode
>  Issue Type: Sub-task
>  Components: redis
>Reporter: Addison
>Assignee: Udo Kohlmeyer
> Fix For: 1.2.0
>
>






[jira] [Commented] (GEODE-2449) Move redis adapter to extension framework

2017-02-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868353#comment-15868353
 ] 

ASF subversion and git services commented on GEODE-2449:


Commit 2fbdb0cd8958051cb807a1674bf8344c92473802 in geode's branch 
refs/heads/feature/GEODE-2449 from [~gosullivan]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=2fbdb0c ]

GEODE-2449. Don't expect moved Redis classes in core.


> Move redis adapter to extension framework
> -
>
> Key: GEODE-2449
> URL: https://issues.apache.org/jira/browse/GEODE-2449
> Project: Geode
>  Issue Type: Sub-task
>  Components: redis
>Reporter: Addison
>Assignee: Udo Kohlmeyer
> Fix For: 1.2.0
>
>






[jira] [Updated] (GEODE-2488) Netstat command fails with OutOfMemoryError

2017-02-15 Thread Kirk Lund (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk Lund updated GEODE-2488:
-
Summary: Netstat command fails with OutOfMemoryError  (was: 
NetstatDUnitTest fails with OutOfMemoryError)

> Netstat command fails with OutOfMemoryError
> ---
>
> Key: GEODE-2488
> URL: https://issues.apache.org/jira/browse/GEODE-2488
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, management
>Reporter: Kirk Lund
>  Labels: MiscellaneousCommands, NetstatCommand, netstat
>
> The JUnit controller JVM for NetstatDUnitTest fails with something like the 
> following stack. The cause appears to be one of the DUnit VMs running out of 
> memory (see the OOME stack further down in this description).
> {noformat}
> org.junit.ComparisonFailure: [{"errorCode":505,"message":["Could not process 
> command due to GemFire error. Error occurred while executing netstat on 
> [server-1]"]}] expected:<[OK]> but was:<[ERROR]>
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> org.apache.geode.test.dunit.rules.GfshShellConnectionRule.executeAndVerifyCommand(GfshShellConnectionRule.java:163)
>   at 
> org.apache.geode.management.internal.cli.NetstatDUnitTest.testConnectToJmxManagerOne(NetstatDUnitTest.java:81)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.apache.geode.test.junit.rules.DescribedExternalResource$1.evaluate(DescribedExternalResource.java:37)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at

[jira] [Assigned] (GEODE-2469) Redis adapter Hash key support

2017-02-15 Thread Hitesh Khamesra (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Khamesra reassigned GEODE-2469:
--

Assignee: Hitesh Khamesra

> Redis adapter Hash key support
> --
>
> Key: GEODE-2469
> URL: https://issues.apache.org/jira/browse/GEODE-2469
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Gregory Green
>Assignee: Hitesh Khamesra
>
> The Redis adapter does not appear to handle hash keys correctly.
> The following Example: Redis CLI works.
> localhost:11211>  HSET companies name "John Smith"
> Using a  HSET :id  .. produces an error
> Example:
> localhost:11211>  HSET companies:1000 name "John Smith"
> [Server error]
> [fine 2017/02/10 16:04:33.289 EST server1  
> tid=0x6a] Region names may only be alphanumeric and may contain hyphens or 
> underscores: companies: 1000
> java.lang.IllegalArgumentException: Region names may only be alphanumeric and 
> may contain hyphens or underscores: companies: 1000
> at 
> org.apache.geode.internal.cache.LocalRegion.validateRegionName(LocalRegion.java:7618)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3201)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3181)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3169)
> at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.createRegion(RegionCreateFunction.java:355)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.execute(RegionCreateFunction.java:90)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:333)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution$2.run(AbstractExecution.java:303)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:621)
> at 
> org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1067)
> at java.lang.Thread.run(Thread.java:745)
> //Example Spring Data Redis Object sample
> @Data
> @EqualsAndHashCode()
> @RedisHash(value="companies")
> @NoArgsConstructor
> public class Company
> {
>   private @Id String id;
>
> //Repository
> public interface CompanyRepository extends CrudRepository 
> {
>  
> }
> //When saving using a repository
> repository.save(this.myCompany);
> [Same Server error]
> java.lang.IllegalArgumentException: Region names may only be alphanumeric and 
> may contain hyphens or underscores: 
> companies:f05405c2-86f2-4aaf-bd0c-6fecd483bf28
> at 
> org.apache.geode.internal.cache.LocalRegion.validateRegionName(LocalRegion.java:7618)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3201)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3181)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3169)
> at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.createRegion(RegionCreateFunction.java:355)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.execute(RegionCreateFunction.java:90)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:333)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution$2.run(AbstractExecution.java:303)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:621)
> at 
> org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1067)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2488) Netstat command fails with OutOfMemoryError

2017-02-15 Thread Kirk Lund (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk Lund updated GEODE-2488:
-
Description: 
Note: this can occur outside of dunit tests as well. Just using the gfsh 
netstat command on a locator or server with too little heap space will hit 
this. See the 1st comment about not streaming the netstat output -- the entire 
output is held in memory in a string and two byte arrays before the result is 
sent back from NetstatFunction.
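The streaming alternative hinted at above can be sketched roughly as follows. This is a hypothetical illustration only: ChunkedSender, sendChunk, and the 8 KB threshold are invented for the sketch and are not the actual NetstatFunction/ResultSender API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch: deliver command output in bounded chunks rather than
// holding the whole result in one String plus extra byte[] copies.
public class ChunkedSender {
    static final int CHUNK = 8 * 1024;  // flush threshold in chars (invented)

    // 'sendChunk' stands in for a function-result callback sending one chunk.
    static void stream(Iterable<String> lines, Consumer<String> sendChunk) {
        StringBuilder buf = new StringBuilder(CHUNK);
        for (String line : lines) {
            buf.append(line).append('\n');
            if (buf.length() >= CHUNK) {      // flush before the buffer grows large
                sendChunk.accept(buf.toString());
                buf.setLength(0);
            }
        }
        if (buf.length() > 0) {
            sendChunk.accept(buf.toString()); // final partial chunk
        }
    }

    public static void main(String[] args) {
        List<String> lines = new ArrayList<>();
        for (int i = 0; i < 2000; i++) {
            lines.add("tcp 0 0 127.0.0.1:40404 ESTABLISHED line " + i);
        }
        List<String> chunks = new ArrayList<>();
        stream(lines, chunks::add);
        // The output arrives as several bounded chunks, not one giant String.
        System.out.println("chunks sent: " + chunks.size());
    }
}
```

Flushing bounded chunks keeps peak heap usage proportional to the chunk size rather than to the full netstat output.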

The JUnit controller JVM for NetstatDUnitTest fails with something like the 
following stack. The cause appears to be one of the DUnit VMs running out of 
memory (see the OOME stack further down in this description).
{noformat}
org.junit.ComparisonFailure: [{"errorCode":505,"message":["Could not process 
command due to GemFire error. Error occurred while executing netstat on 
[server-1]"]}] expected:<[OK]> but was:<[ERROR]>
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at 
org.apache.geode.test.dunit.rules.GfshShellConnectionRule.executeAndVerifyCommand(GfshShellConnectionRule.java:163)
at 
org.apache.geode.management.internal.cli.NetstatDUnitTest.testConnectToJmxManagerOne(NetstatDUnitTest.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.apache.geode.test.junit.rules.DescribedExternalResource$1.evaluate(DescribedExternalResource.java:37)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(Re

Re: Build failed in Jenkins: Geode-nightly #749

2017-02-15 Thread Galen M O'Sullivan
Hi Mark,

I'm trying to view the Geode build report at this link:
> There were failing tests. See the report at: file://<
https://builds.apache.org/job/Geode-nightly/ws/geode-core/build/reports/
distributedTest/index.html>

I don't see the geode-core/build/reports directory here:
https://builds.apache.org/job/Geode-nightly/749/artifact/

Thanks,
Galen

On Wed, Feb 15, 2017 at 10:54 AM, Mark Bretl  wrote:

> Hi Galen,
>
> I am able to see the console log [1], test report summary [2], and artifacts
> [3] without logging in. Which 'build report' are you trying to view?
>
> --Mark
>
> [1] https://builds.apache.org/job/Geode-nightly/749/console
> [2] https://builds.apache.org/job/Geode-nightly/749/testReport/
> [3] https://builds.apache.org/job/Geode-nightly/749/artifact/
>
> On Wed, Feb 15, 2017 at 9:04 AM, Galen M O'Sullivan  >
> wrote:
>
> > I don't seem to have access to see the build report. Is that restricted
> to
> > committers only?
> >
> > Thanks,
> > Galen
> >
> > On Wed, Feb 15, 2017 at 8:08 AM, Apache Jenkins Server <
> > jenk...@builds.apache.org> wrote:
> >
> > > See 
> > >
> > > Changes:
> > >
> > > [kmiller] GEODE-2479 Remove docs reference to gemstone.com package
> > >
> > > [jiliao] GEODE-2474: mark NetstatDUnitTest as flaky
> > >
> > > [jiliao] refactor ServerStarterRule and LocatorStarterRule so that they
> > > can be
> > >
> > > [gzhou] GEODE-2471: fix the race condition in test code.
> > >
> > > --
> > > [...truncated 713 lines...]
> > > :geode-cq:build
> > > :geode-cq:distributedTest
> > > :geode-cq:flakyTest
> > > :geode-cq:integrationTest
> > > :geode-json:assemble
> > > :geode-json:compileTestJava UP-TO-DATE
> > > :geode-json:processTestResources UP-TO-DATE
> > > :geode-json:testClasses UP-TO-DATE
> > > :geode-json:checkMissedTests UP-TO-DATE
> > > :geode-json:spotlessJavaCheck
> > > :geode-json:spotlessCheck
> > > :geode-json:test UP-TO-DATE
> > > :geode-json:check
> > > :geode-json:build
> > > :geode-json:distributedTest UP-TO-DATE
> > > :geode-json:flakyTest UP-TO-DATE
> > > :geode-json:integrationTest UP-TO-DATE
> > > :geode-junit:javadoc
> > > :geode-junit:javadocJar
> > > :geode-junit:sourcesJar
> > > :geode-junit:signArchives SKIPPED
> > > :geode-junit:assemble
> > > :geode-junit:compileTestJava
> > > :geode-junit:processTestResources UP-TO-DATE
> > > :geode-junit:testClasses
> > > :geode-junit:checkMissedTests
> > > :geode-junit:spotlessJavaCheck
> > > :geode-junit:spotlessCheck
> > > :geode-junit:test
> > > :geode-junit:check
> > > :geode-junit:build
> > > :geode-junit:distributedTest
> > > :geode-junit:flakyTest
> > > :geode-junit:integrationTest
> > > :geode-lucene:assemble
> > > :geode-lucene:compileTestJava
> > > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > > lucene-test-framework/6.4.1/lucene-test-framework-6.4.1.pom
> > > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > > lucene-codecs/6.4.1/lucene-codecs-6.4.1.pom
> > > Download https://repo1.maven.org/maven2/com/carrotsearch/
> > > randomizedtesting/randomizedtesting-runner/2.4.
> > > 0/randomizedtesting-runner-2.4.0.pom
> > > Download https://repo1.maven.org/maven2/com/carrotsearch/
> > > randomizedtesting/randomizedtesting-parent/2.4.
> > > 0/randomizedtesting-parent-2.4.0.pom
> > > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > > lucene-test-framework/6.4.1/lucene-test-framework-6.4.1.jar
> > > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > > lucene-codecs/6.4.1/lucene-codecs-6.4.1.jar
> > > Download https://repo1.maven.org/maven2/com/carrotsearch/
> > > randomizedtesting/randomizedtesting-runner/2.4.
> > > 0/randomizedtesting-runner-2.4.0.jar
> > > Note: Some input files use or override a deprecated API.
> > > Note: Recompile with -Xlint:deprecation for details.
> > > Note: Some input files use unchecked or unsafe operations.
> > > Note: Recompile with -Xlint:unchecked for details.
> > > :geode-lucene:processTestResources
> > > :geode-lucene:testClasses
> > > :geode-lucene:checkMissedTests
> > > :geode-lucene:spotlessJavaCheck
> > > :geode-lucene:spotlessCheck
> > > :geode-lucene:test
> > > :geode-lucene:check
> > > :geode-lucene:build
> > > :geode-lucene:distributedTest
> > > :geode-lucene:flakyTest
> > > :geode-lucene:integrationTest
> > > :geode-old-client-support:assemble
> > > :geode-old-client-support:compileTestJava
> > > :geode-old-client-support:processTestResources UP-TO-DATE
> > > :geode-old-client-support:testClasses
> > > :geode-old-client-support:checkMissedTests
> > > :geode-old-client-support:spotlessJavaCheck
> > > :geode-old-client-support:spotlessCheck
> > > :geode-old-client-support:test
> > > :geode-old-client-support:check
> > > :geode-old-client-support:build
> > > :geode-old-client-support:distributedTest
> > > :geode-old-client-support:flakyTest
> > > :geode-old-client-support:integrationTest
> > > :geode-old-versions:javadoc UP-TO-DATE

[jira] [Updated] (GEODE-2436) Geode doesn't handle byte[] as key

2017-02-15 Thread Hitesh Khamesra (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Khamesra updated GEODE-2436:
---
Issue Type: Sub-task  (was: Improvement)
Parent: GEODE-2444

> Geode doesn't handle byte[] as key
> --
>
> Key: GEODE-2436
> URL: https://issues.apache.org/jira/browse/GEODE-2436
> Project: Geode
>  Issue Type: Sub-task
>  Components: regions
>Reporter: Hitesh Khamesra
>
> Geode doesn't handle byte[] as a key. byte[] doesn't override the 
> hashCode/equals methods; it just uses the native identity hash code. Because 
> of that, the following code returns null for key k2:
> {code}
> Cache c = CacheFactory.getAnyInstance();
> Region region = c.getRegion("primitiveKVStore");
> byte[] k1 = new byte[] {1, 2};
> region.put(k1, k1);
> byte[] k2 = new byte[] {1, 2};
> System.out.println(">> " + region.get(k2));
> {code}
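One common workaround for identity-based array keys, shown here as a plain-Java sketch (the ByteArrayKey wrapper is a hypothetical helper, not a Geode API), is to wrap the byte[] in a key type with value-based equals/hashCode:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Demonstrates why byte[] fails as a map/region key: arrays use identity
// hashCode/equals, so two arrays with equal content are distinct keys.
public class ByteArrayKeyDemo {
    // Hypothetical wrapper: gives byte[] value-based equals/hashCode.
    static final class ByteArrayKey {
        private final byte[] bytes;
        ByteArrayKey(byte[] bytes) { this.bytes = bytes.clone(); }
        @Override public boolean equals(Object o) {
            return o instanceof ByteArrayKey
                && Arrays.equals(bytes, ((ByteArrayKey) o).bytes);
        }
        @Override public int hashCode() { return Arrays.hashCode(bytes); }
    }

    public static void main(String[] args) {
        Map<Object, String> map = new HashMap<>();
        byte[] k1 = new byte[] {1, 2};
        byte[] k2 = new byte[] {1, 2};
        map.put(k1, "v");
        System.out.println(map.get(k2));                   // null: identity equals
        map.put(new ByteArrayKey(k1), "v");
        System.out.println(map.get(new ByteArrayKey(k2))); // v: value equals
    }
}
```

A HashMap is used only to keep the sketch self-contained; a Geode Region shows the same identity-based lookup for raw byte[] keys.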



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2469) Redis adapter Hash key support

2017-02-15 Thread Hitesh Khamesra (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Khamesra updated GEODE-2469:
---
Issue Type: Sub-task  (was: Bug)
Parent: GEODE-2444

> Redis adapter Hash key support
> --
>
> Key: GEODE-2469
> URL: https://issues.apache.org/jira/browse/GEODE-2469
> Project: Geode
>  Issue Type: Sub-task
>  Components: redis
>Reporter: Gregory Green
>Assignee: Hitesh Khamesra
>
> The Redis adapter does not appear to handle hash keys correctly.
> The following example from the Redis CLI works:
> localhost:11211>  HSET companies name "John Smith"
> Using a  HSET :id  .. produces an error
> Example:
> localhost:11211>  HSET companies:1000 name "John Smith"
> [Server error]
> [fine 2017/02/10 16:04:33.289 EST server1  
> tid=0x6a] Region names may only be alphanumeric and may contain hyphens or 
> underscores: companies: 1000
> java.lang.IllegalArgumentException: Region names may only be alphanumeric and 
> may contain hyphens or underscores: companies: 1000
> at 
> org.apache.geode.internal.cache.LocalRegion.validateRegionName(LocalRegion.java:7618)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3201)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3181)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3169)
> at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.createRegion(RegionCreateFunction.java:355)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.execute(RegionCreateFunction.java:90)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:333)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution$2.run(AbstractExecution.java:303)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:621)
> at 
> org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1067)
> at java.lang.Thread.run(Thread.java:745)
> //Example Spring Data Redis Object sample
> @Data
> @EqualsAndHashCode()
> @RedisHash(value="companies")
> @NoArgsConstructor
> public class Company
> {
>   private @Id String id;
> }
>
> //Repository
> public interface CompanyRepository extends CrudRepository<Company, String>
> {
>  
> }
> //When saving using a repository
> repository.save(this.myCompany);
> [Same Server error]
> java.lang.IllegalArgumentException: Region names may only be alphanumeric and 
> may contain hyphens or underscores: 
> companies:f05405c2-86f2-4aaf-bd0c-6fecd483bf28
> at 
> org.apache.geode.internal.cache.LocalRegion.validateRegionName(LocalRegion.java:7618)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3201)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3181)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3169)
> at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.createRegion(RegionCreateFunction.java:355)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.execute(RegionCreateFunction.java:90)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:333)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution$2.run(AbstractExecution.java:303)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:621)
> at 
> org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1067)
> at java.lang.Thread.run(Thread.java:745)
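Until the adapter handles such keys, one workaround direction is to sanitize the Redis key before deriving a region name from it. A minimal sketch (RedisKeySanitizer and toRegionName are invented names, not part of the adapter):

```java
// Hypothetical sketch: map a Redis key such as "companies:1000" onto a name
// Geode will accept (alphanumeric characters, hyphens, and underscores only).
public class RedisKeySanitizer {

    static String toRegionName(String redisKey) {
        // Replace every character Geode's region-name validation rejects.
        return redisKey.replaceAll("[^A-Za-z0-9_-]", "_");
    }

    public static void main(String[] args) {
        System.out.println(toRegionName("companies:1000"));
        // companies_1000
        System.out.println(toRegionName("companies:f05405c2-86f2-4aaf-bd0c-6fecd483bf28"));
        // companies_f05405c2-86f2-4aaf-bd0c-6fecd483bf28
    }
}
```

Note that plain character substitution can collide ("a:b" and "a_b" map to the same name), so a real fix would need an escaping scheme or a different region-naming strategy.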



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Build failed in Jenkins: Geode-nightly #749

2017-02-15 Thread Dan Smith
Galen - I'm guessing you might be trying to click on the link below? I'm
not sure jenkins actually keeps that stuff - I think it's just looking for
file:/// urls in the console output and translating them to http:// urls.
You should be able to find the test results by going to the test result
link Mark sent out.

We might want to consider putting a link to the test result or maybe the
status page in the emails that jenkins sends out.

* What went wrong:
Execution failed for task ':geode-core:distributedTest'.
> There were failing tests. See the report at: file://<
https://builds.apache.org/job/Geode-nightly/ws/geode-core/build/reports/
distributedTest/index.html>

On Wed, Feb 15, 2017 at 10:54 AM, Mark Bretl  wrote:

> Hi Galen,
>
> I am able to see the console log [1], test report summary [2], and artifacts
> [3] without logging in. Which 'build report' are you trying to view?
>
> --Mark
>
> [1] https://builds.apache.org/job/Geode-nightly/749/console
> [2] https://builds.apache.org/job/Geode-nightly/749/testReport/
> [3] https://builds.apache.org/job/Geode-nightly/749/artifact/
>
> On Wed, Feb 15, 2017 at 9:04 AM, Galen M O'Sullivan  >
> wrote:
>
> > I don't seem to have access to see the build report. Is that restricted
> to
> > committers only?
> >
> > Thanks,
> > Galen
> >
> > On Wed, Feb 15, 2017 at 8:08 AM, Apache Jenkins Server <
> > jenk...@builds.apache.org> wrote:
> >
> > > See 
> > >
> > > Changes:
> > >
> > > [kmiller] GEODE-2479 Remove docs reference to gemstone.com package
> > >
> > > [jiliao] GEODE-2474: mark NetstatDUnitTest as flaky
> > >
> > > [jiliao] refactor ServerStarterRule and LocatorStarterRule so that they
> > > can be
> > >
> > > [gzhou] GEODE-2471: fix the race condition in test code.
> > >
> > > --
> > > [...truncated 713 lines...]
> > > :geode-cq:build
> > > :geode-cq:distributedTest
> > > :geode-cq:flakyTest
> > > :geode-cq:integrationTest
> > > :geode-json:assemble
> > > :geode-json:compileTestJava UP-TO-DATE
> > > :geode-json:processTestResources UP-TO-DATE
> > > :geode-json:testClasses UP-TO-DATE
> > > :geode-json:checkMissedTests UP-TO-DATE
> > > :geode-json:spotlessJavaCheck
> > > :geode-json:spotlessCheck
> > > :geode-json:test UP-TO-DATE
> > > :geode-json:check
> > > :geode-json:build
> > > :geode-json:distributedTest UP-TO-DATE
> > > :geode-json:flakyTest UP-TO-DATE
> > > :geode-json:integrationTest UP-TO-DATE
> > > :geode-junit:javadoc
> > > :geode-junit:javadocJar
> > > :geode-junit:sourcesJar
> > > :geode-junit:signArchives SKIPPED
> > > :geode-junit:assemble
> > > :geode-junit:compileTestJava
> > > :geode-junit:processTestResources UP-TO-DATE
> > > :geode-junit:testClasses
> > > :geode-junit:checkMissedTests
> > > :geode-junit:spotlessJavaCheck
> > > :geode-junit:spotlessCheck
> > > :geode-junit:test
> > > :geode-junit:check
> > > :geode-junit:build
> > > :geode-junit:distributedTest
> > > :geode-junit:flakyTest
> > > :geode-junit:integrationTest
> > > :geode-lucene:assemble
> > > :geode-lucene:compileTestJava
> > > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > > lucene-test-framework/6.4.1/lucene-test-framework-6.4.1.pom
> > > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > > lucene-codecs/6.4.1/lucene-codecs-6.4.1.pom
> > > Download https://repo1.maven.org/maven2/com/carrotsearch/
> > > randomizedtesting/randomizedtesting-runner/2.4.
> > > 0/randomizedtesting-runner-2.4.0.pom
> > > Download https://repo1.maven.org/maven2/com/carrotsearch/
> > > randomizedtesting/randomizedtesting-parent/2.4.
> > > 0/randomizedtesting-parent-2.4.0.pom
> > > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > > lucene-test-framework/6.4.1/lucene-test-framework-6.4.1.jar
> > > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > > lucene-codecs/6.4.1/lucene-codecs-6.4.1.jar
> > > Download https://repo1.maven.org/maven2/com/carrotsearch/
> > > randomizedtesting/randomizedtesting-runner/2.4.
> > > 0/randomizedtesting-runner-2.4.0.jar
> > > Note: Some input files use or override a deprecated API.
> > > Note: Recompile with -Xlint:deprecation for details.
> > > Note: Some input files use unchecked or unsafe operations.
> > > Note: Recompile with -Xlint:unchecked for details.
> > > :geode-lucene:processTestResources
> > > :geode-lucene:testClasses
> > > :geode-lucene:checkMissedTests
> > > :geode-lucene:spotlessJavaCheck
> > > :geode-lucene:spotlessCheck
> > > :geode-lucene:test
> > > :geode-lucene:check
> > > :geode-lucene:build
> > > :geode-lucene:distributedTest
> > > :geode-lucene:flakyTest
> > > :geode-lucene:integrationTest
> > > :geode-old-client-support:assemble
> > > :geode-old-client-support:compileTestJava
> > > :geode-old-client-support:processTestResources UP-TO-DATE
> > > :geode-old-client-support:testClasses
> > > :geode-old-client-support:checkMissedTests
> > > :geode-old-client-support:spotlessJavaCheck
> > > :geode-old-client-su

[jira] [Updated] (GEODE-2473) redis-cli hangs if Geode unable to process command properly

2017-02-15 Thread Hitesh Khamesra (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Khamesra updated GEODE-2473:
---
Issue Type: Sub-task  (was: Bug)
Parent: GEODE-2444

> redis-cli hangs if Geode unable to process command properly
> ---
>
> Key: GEODE-2473
> URL: https://issues.apache.org/jira/browse/GEODE-2473
> Project: Geode
>  Issue Type: Sub-task
>  Components: redis
>Reporter: Hitesh Khamesra
>
> Here is the command: HSET companies:1000 name "John Smith"
> "GeodeRedisServer-WorkerThread-1" #86 prio=5 os_prio=0 tid=0x7f1a20002800 
> nid=0x4750 sleeping[0x7f1bf0dd9000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.geode.management.internal.cli.commands.CreateAlterDestroyRegionCommands.verifyDistributedRegionMbean(CreateAlterDestroyRegionCommands.java:410)
> at 
> org.apache.geode.management.internal.cli.commands.CreateAlterDestroyRegionCommands.createRegion(CreateAlterDestroyRegionCommands.java:371)
> at 
> org.apache.geode.redis.internal.RegionProvider.createRegionGlobally(RegionProvider.java:405)
> at 
> org.apache.geode.redis.internal.RegionProvider.getOrCreateRegion0(RegionProvider.java:292)
> at 
> org.apache.geode.redis.internal.RegionProvider.getOrCreateRegion(RegionProvider.java:212)
> at 
> org.apache.geode.redis.internal.executor.hash.HashExecutor.getOrCreateRegion(HashExecutor.java:31)
> at 
> org.apache.geode.redis.internal.executor.hash.HSetExecutor.executeCommand(HSetExecutor.java:48)
> at 
> org.apache.geode.redis.internal.ExecutionHandlerContext.executeWithoutTransaction(ExecutionHandlerContext.java:235)
> at 
> org.apache.geode.redis.internal.ExecutionHandlerContext.executeCommand(ExecutionHandlerContext.java:199)
> at 
> org.apache.geode.redis.internal.ExecutionHandlerContext.channelRead(ExecutionHandlerContext.java:139)
> at 
> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:368)
> at 
> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:353)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:173)
> at 
> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:368)
> at 
> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:353)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:780)
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:100)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:497)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:465)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:359)
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2449) Move redis adapter to extension framework

2017-02-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868407#comment-15868407
 ] 

ASF subversion and git services commented on GEODE-2449:


Commit 2c70249cb44267ecda8bfa40e4ff1d808e4e1b50 in geode's branch 
refs/heads/feature/GEODE-2444 from [~bschuchardt]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=2c70249 ]

Merge branch 'feature/GEODE-2449' into feature/GEODE-2444


> Move redis adapter to extension framework
> -
>
> Key: GEODE-2449
> URL: https://issues.apache.org/jira/browse/GEODE-2449
> Project: Geode
>  Issue Type: Sub-task
>  Components: redis
>Reporter: Addison
>Assignee: Udo Kohlmeyer
> Fix For: 1.2.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2444) Redis Adapter Performance Improvements

2017-02-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868408#comment-15868408
 ] 

ASF subversion and git services commented on GEODE-2444:


Commit 2c70249cb44267ecda8bfa40e4ff1d808e4e1b50 in geode's branch 
refs/heads/feature/GEODE-2444 from [~bschuchardt]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=2c70249 ]

Merge branch 'feature/GEODE-2449' into feature/GEODE-2444


> Redis Adapter Performance Improvements
> --
>
> Key: GEODE-2444
> URL: https://issues.apache.org/jira/browse/GEODE-2444
> Project: Geode
>  Issue Type: New Feature
>  Components: redis
>Reporter: Addison
> Fix For: 1.2.0
>
>
> The goal of this effort is to further test and complete the Redis Adapter to 
> make the code more readable and performant. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2486) SSL ciphers other than NULL not supported

2017-02-15 Thread Jacob S. Barrett (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob S. Barrett resolved GEODE-2486.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

> SSL ciphers other than NULL not supported
> -
>
> Key: GEODE-2486
> URL: https://issues.apache.org/jira/browse/GEODE-2486
> Project: Geode
>  Issue Type: Bug
>  Components: native client
>Reporter: Jacob S. Barrett
> Fix For: 1.2.0
>
>
> SSLImpl does not correctly initialize the OpenSSL library, so ciphers other 
> than the NULL cipher cannot be used.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2408) Refactor CacheableDate to use C++ std::chrono

2017-02-15 Thread Jacob S. Barrett (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob S. Barrett resolved GEODE-2408.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

> Refactor CacheableDate to use C++ std::chrono
> -
>
> Key: GEODE-2408
> URL: https://issues.apache.org/jira/browse/GEODE-2408
> Project: Geode
>  Issue Type: Task
>  Components: native client
>Reporter: Jacob S. Barrett
>Assignee: Jacob S. Barrett
> Fix For: 1.2.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2449) Move redis adapter to extension framework

2017-02-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868419#comment-15868419
 ] 

ASF subversion and git services commented on GEODE-2449:


Commit 6d5b7a407bba6c7a96b03ac48507538fa03c8a83 in geode's branch 
refs/heads/feature/GEODE-2444 from [~bschuchardt]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=6d5b7a4 ]

GEODE-2449: move redis to its own module

Added redis module to geode-dependencies.jar


> Move redis adapter to extension framework
> -
>
> Key: GEODE-2449
> URL: https://issues.apache.org/jira/browse/GEODE-2449
> Project: Geode
>  Issue Type: Sub-task
>  Components: redis
>Reporter: Addison
>Assignee: Udo Kohlmeyer
> Fix For: 1.2.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2489) Tombstone messages with keys are sent to peer partitioned region nodes even though no clients are registered

2017-02-15 Thread Anilkumar Gingade (JIRA)
Anilkumar Gingade created GEODE-2489:


 Summary: Tombstone messages with keys are sent to peer partitioned 
region nodes even though no clients are registered
 Key: GEODE-2489
 URL: https://issues.apache.org/jira/browse/GEODE-2489
 Project: Geode
  Issue Type: Bug
  Components: regions
Reporter: Anilkumar Gingade


Tombstone:
As part of consistency checking, when an entry is destroyed, the member 
temporarily retains the entry to detect possible conflicts with operations that 
have occurred. The retained entry is referred to as a tombstone.

When tombstones are removed, tombstone messages are sent to region replicas; in 
the case of a Partitioned Region (PR), messages are also sent to peer region 
nodes for client events.

Currently, tombstone messages meant for clients, carrying all the removed keys, 
are sent to peer PR nodes even though no clients are registered on those peers.

Based on the number of tombstone keys processed (by default 10), this could be 
a large message sent to the peer node, which could impact the performance of 
the system/cluster.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] geode issue #398: Split the redis adapter into its own package

2017-02-15 Thread bschuchardt
Github user bschuchardt commented on the issue:

https://github.com/apache/geode/pull/398
  
@metatype there are no changes in the netty NOTICE.txt between 4.1.7 and 
4.1.8


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Build failed in Jenkins: Geode-nightly #749

2017-02-15 Thread Galen M O'Sullivan
ah, thanks Dan. I was looking in the wrong place.

On Wed, Feb 15, 2017 at 11:12 AM, Dan Smith  wrote:

> Galen - I'm guessing you might be trying to click on the link below? I'm
> not sure jenkins actually keeps that stuff - I think it's just looking for
> file:/// urls in the console output and translating them to http:// urls.
> You should be able to find the test results by going to the test result
> link Mark sent out.
>
> We might want to consider putting a link to the test result or maybe the
> status page in the emails that jenkins sends out.
>
> * What went wrong:
> Execution failed for task ':geode-core:distributedTest'.
> > There were failing tests. See the report at: file://<
> https://builds.apache.org/job/Geode-nightly/ws/geode-core/build/reports/
> distributedTest/index.html>
>
> On Wed, Feb 15, 2017 at 10:54 AM, Mark Bretl  wrote:
>
> > Hi Galen,
> >
> > I am able to see the console log [1], test report summary [2], and artifacts
> > [3] without logging in. Which 'build report' are you trying to view?
> >
> > --Mark
> >
> > [1] https://builds.apache.org/job/Geode-nightly/749/console
> > [2] https://builds.apache.org/job/Geode-nightly/749/testReport/
> > [3] https://builds.apache.org/job/Geode-nightly/749/artifact/
> >
> > On Wed, Feb 15, 2017 at 9:04 AM, Galen M O'Sullivan <
> gosulli...@pivotal.io
> > >
> > wrote:
> >
> > > I don't seem to have access to see the build report. Is that restricted
> > to
> > > committers only?
> > >
> > > Thanks,
> > > Galen
> > >
> > > On Wed, Feb 15, 2017 at 8:08 AM, Apache Jenkins Server <
> > > jenk...@builds.apache.org> wrote:
> > >
> > > > See 
> > > >
> > > > Changes:
> > > >
> > > > [kmiller] GEODE-2479 Remove docs reference to gemstone.com package
> > > >
> > > > [jiliao] GEODE-2474: mark NetstatDUnitTest as flaky
> > > >
> > > > [jiliao] refactor ServerStarterRule and LocatorStarterRule so that
> they
> > > > can be
> > > >
> > > > [gzhou] GEODE-2471: fix the race condition in test code.
> > > >
> > > > --
> > > > [...truncated 713 lines...]
> > > > :geode-cq:build
> > > > :geode-cq:distributedTest
> > > > :geode-cq:flakyTest
> > > > :geode-cq:integrationTest
> > > > :geode-json:assemble
> > > > :geode-json:compileTestJava UP-TO-DATE
> > > > :geode-json:processTestResources UP-TO-DATE
> > > > :geode-json:testClasses UP-TO-DATE
> > > > :geode-json:checkMissedTests UP-TO-DATE
> > > > :geode-json:spotlessJavaCheck
> > > > :geode-json:spotlessCheck
> > > > :geode-json:test UP-TO-DATE
> > > > :geode-json:check
> > > > :geode-json:build
> > > > :geode-json:distributedTest UP-TO-DATE
> > > > :geode-json:flakyTest UP-TO-DATE
> > > > :geode-json:integrationTest UP-TO-DATE
> > > > :geode-junit:javadoc
> > > > :geode-junit:javadocJar
> > > > :geode-junit:sourcesJar
> > > > :geode-junit:signArchives SKIPPED
> > > > :geode-junit:assemble
> > > > :geode-junit:compileTestJava
> > > > :geode-junit:processTestResources UP-TO-DATE
> > > > :geode-junit:testClasses
> > > > :geode-junit:checkMissedTests
> > > > :geode-junit:spotlessJavaCheck
> > > > :geode-junit:spotlessCheck
> > > > :geode-junit:test
> > > > :geode-junit:check
> > > > :geode-junit:build
> > > > :geode-junit:distributedTest
> > > > :geode-junit:flakyTest
> > > > :geode-junit:integrationTest
> > > > :geode-lucene:assemble
> > > > :geode-lucene:compileTestJava
> > > > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > > > lucene-test-framework/6.4.1/lucene-test-framework-6.4.1.pom
> > > > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > > > lucene-codecs/6.4.1/lucene-codecs-6.4.1.pom
> > > > Download https://repo1.maven.org/maven2/com/carrotsearch/
> > > > randomizedtesting/randomizedtesting-runner/2.4.
> > > > 0/randomizedtesting-runner-2.4.0.pom
> > > > Download https://repo1.maven.org/maven2/com/carrotsearch/
> > > > randomizedtesting/randomizedtesting-parent/2.4.
> > > > 0/randomizedtesting-parent-2.4.0.pom
> > > > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > > > lucene-test-framework/6.4.1/lucene-test-framework-6.4.1.jar
> > > > Download https://repo1.maven.org/maven2/org/apache/lucene/
> > > > lucene-codecs/6.4.1/lucene-codecs-6.4.1.jar
> > > > Download https://repo1.maven.org/maven2/com/carrotsearch/
> > > > randomizedtesting/randomizedtesting-runner/2.4.
> > > > 0/randomizedtesting-runner-2.4.0.jar
> > > > Note: Some input files use or override a deprecated API.
> > > > Note: Recompile with -Xlint:deprecation for details.
> > > > Note: Some input files use unchecked or unsafe operations.
> > > > Note: Recompile with -Xlint:unchecked for details.
> > > > :geode-lucene:processTestResources
> > > > :geode-lucene:testClasses
> > > > :geode-lucene:checkMissedTests
> > > > :geode-lucene:spotlessJavaCheck
> > > > :geode-lucene:spotlessCheck
> > > > :geode-lucene:test
> > > > :geode-lucene:check
> > > > :geode-lucene:build
> > > > :geode-lucene:distributedTest
> > > > :geod

[GitHub] geode issue #398: Split the redis adapter into its own package

2017-02-15 Thread galen-pivotal
Github user galen-pivotal commented on the issue:

https://github.com/apache/geode/pull/398
  
@metatype The import ordering is the rather unfortunate result of 
IntelliJ's import cleanup. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (GEODE-84) Extract Redis adaptor from core

2017-02-15 Thread Swapnil Bawaskar (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-84?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Bawaskar closed GEODE-84.
-

> Extract Redis adaptor from core
> ---
>
> Key: GEODE-84
> URL: https://issues.apache.org/jira/browse/GEODE-84
> Project: Geode
>  Issue Type: Task
>  Components: extensions
>Reporter: Swapnil Bawaskar
>  Labels: experimental, gsoc2016
>
> The Redis adaptor [Geode-46| https://issues.apache.org/jira/browse/GEODE-46] 
> is part of the final code drop in sga2 branch. However it is in gemfire-core 
> directory and needs to be extracted into a new top-level directory 
> gemfire-redis-adaptor.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-84) Extract Redis adaptor from core

2017-02-15 Thread Swapnil Bawaskar (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-84?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Bawaskar resolved GEODE-84.
---
Resolution: Duplicate

> Extract Redis adaptor from core
> ---
>
> Key: GEODE-84
> URL: https://issues.apache.org/jira/browse/GEODE-84
> Project: Geode
>  Issue Type: Task
>  Components: extensions
>Reporter: Swapnil Bawaskar
>  Labels: experimental, gsoc2016
>
> The Redis adaptor [Geode-46| https://issues.apache.org/jira/browse/GEODE-46] 
> is part of the final code drop in sga2 branch. However it is in gemfire-core 
> directory and needs to be extracted into a new top-level directory 
> gemfire-redis-adaptor.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: GeodeRedisAdapter improvments/feedback

2017-02-15 Thread Gregory Green
Hitesh and Team,

Also, I think geospatial support in core GemFire that could be exposed through
the following Redis GEO* commands would be great:

GEOADD
GEODIST
GEOHASH
GEOPOS
GEORADIUS
GEORADIUSBYMEMBER




On Wed, Feb 15, 2017 at 10:48 AM, Gregory Green  wrote:

> Hello Hitesh,
>
> The following is my feedback.
>
> *1. Redis Type String*
>   I like the idea of creating a region upfront. If we are still using the
> convention that internal region names start with "__" , then I would
> suggest something like a region named "__RedisString"
>
> *2. List Type*
>
> I propose using a single partition region (ex: "__RedisList") for the List
> commands.
>
> Region<ByteArrayWrapper, List<ByteArrayWrapper>> region;
>
> //Note: ByteArrayWrapper is what the current RedisAdapter uses as its data
> type. It converts strings to bytes using UTF8 encoding
>
> Example Redis commands
>
> RPUSH mylist A =>
>
>  Region<ByteArrayWrapper, List<ByteArrayWrapper>> region = getRegion("__RedisList")
>  List<ByteArrayWrapper> list = getOrCreateList(mylist);
>  list.add(A)
>  region.put(mylist, list)
>
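[Editor's note] For illustration, the RPUSH get-modify-put pattern above can be made runnable with a plain concurrent map standing in for the proposed "__RedisList" partitioned region. The class name, String values, and the map itself are assumptions for this sketch only; the proposal uses ByteArrayWrapper and a real Geode Region.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class RedisListSketch {
  // Stand-in for the proposed "__RedisList" partitioned region; a real
  // implementation would use org.apache.geode.cache.Region instead of a map.
  static final ConcurrentMap<String, List<String>> LIST_REGION = new ConcurrentHashMap<>();

  // RPUSH key value: append to the list stored at key, creating it if absent.
  // The put-back mirrors the email's get-modify-put pattern; against a real
  // region it is what redistributes the modified list to other members.
  static int rpush(String key, String value) {
    List<String> list = LIST_REGION.computeIfAbsent(key, k -> new ArrayList<>());
    list.add(value);
    LIST_REGION.put(key, list);
    return list.size();
  }

  public static void main(String[] args) {
    rpush("mylist", "A");
    rpush("mylist", "B");
    System.out.println(LIST_REGION.get("mylist")); // prints [A, B]
  }
}
```

Note the get-modify-put sequence is not atomic; a production adapter would need locking or delta propagation, which this sketch deliberately omits.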
> *3. Hashes*
>
> Based on my Spring Data Redis testing for Hash/object support.
>
> HMSET and similar Hash commands are submitted in the following format:
> HMSET region:key [field value]+ I proposed creating regions with the
> following format:
>
> Region<Integer, Map<String, String>> region;
>
> Also see Hashes section at the following URL https://redis.io/topics/data-types
>
> Example Redis command:
>
> HMSET companies:100 _class io.pivotal.redis.gemfire.example.repository.Company
> id 100 name nylaInc email i...@nylainc.io website nylaInc.io taxID id:1
> address.address1 address1 address.address2 address2 address.cityTown
> cityTown address.stateProvince stateProvince address.zip zip
> address.country country
>
> =>
>
> //Pseudo Access code
> Region<Integer, Map<String, String>> companiesRegion = getRegion("companies")
> companiesRegion.put(100, toMap(fieldValues))
>
> //--
>
> // HGETALL region:key
>
> HGETALL companies:100 =>
>
> Region<Integer, Map<String, String>> companiesRegion = getRegion("companies")
> return companiesRegion.get(100)
>
> //HSET region:key field value
>
> HSET companies:100 email upda...@pivotal.io =>
>
> Region<Integer, Map<String, String>> companiesRegion = getRegion("companies");
> Map<String, String> map = companiesRegion.get(100)
> map.put(email, upda...@pivotal.io)
> companiesRegion.put(100, map);
>
> FYI - I started to implement this and hope to submit a pull request soon
> related to GEODE-2469.
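[Editor's note] The HMSET/HGETALL mapping above can likewise be sketched with maps standing in for the per-namespace regions. Class and helper names here are hypothetical, and plain String fields replace the proposal's ByteArrayWrapper; the "region:key" split is the convention described in the email.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class RedisHashSketch {
  // Stand-in for one region per hash "namespace" (e.g. "companies");
  // each region maps the id portion of "region:key" to a field/value map.
  static final ConcurrentMap<String, ConcurrentMap<String, Map<String, String>>> REGIONS =
      new ConcurrentHashMap<>();

  // HMSET region:key field value [field value ...]
  static void hmset(String regionKey, String... fieldValues) {
    String[] parts = regionKey.split(":", 2); // "companies:100" -> ("companies", "100")
    Map<String, String> hash = getRegion(parts[0]).computeIfAbsent(parts[1], k -> new HashMap<>());
    for (int i = 0; i < fieldValues.length; i += 2) {
      hash.put(fieldValues[i], fieldValues[i + 1]);
    }
  }

  // HGETALL region:key
  static Map<String, String> hgetall(String regionKey) {
    String[] parts = regionKey.split(":", 2);
    return getRegion(parts[0]).get(parts[1]);
  }

  static ConcurrentMap<String, Map<String, String>> getRegion(String name) {
    return REGIONS.computeIfAbsent(name, n -> new ConcurrentHashMap<>());
  }

  public static void main(String[] args) {
    hmset("companies:100", "name", "nylaInc", "id", "100");
    System.out.println(hgetall("companies:100").get("name")); // prints nylaInc
  }
}
```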
>
>
> *4. Set*
>
> I propose using a single partition region (ex: __RedisSET) for the SET
> commands.
>
> Region<ByteArrayWrapper, Set<ByteArrayWrapper>> region;
>
> Example Redis commands
>
> SADD myset "Hello" =>
>
> Region<ByteArrayWrapper, Set<ByteArrayWrapper>> region = getRegion("__RedisSET");
> Set<ByteArrayWrapper> set = region.get(myset)
> boolean bool = set.add(Hello)
> if(bool){
>   region.put(myset,set)
> }
> return bool;
>
> SISMEMBER myset "Hello" =>
>
> Region<ByteArrayWrapper, Set<ByteArrayWrapper>> region = getRegion("__RedisSET");
> Set<ByteArrayWrapper> set = region.get(myset)
> return set.contains(Hello)
>
> FYI - I started to implement this and hope to submit a pull request soon
> related to GEODE-2469.
>
>
> *5. SortedSets *
>
> I propose using a single partition region for the SET commands.
>
> Region<ByteArrayWrapper, SortedSet<ByteArrayWrapper>> region;
>
> 6. Default config for geode-region (vote)
>
> I think the default setting should be partitioned with persistence and no
> redundant copies.
>
> 7. It seems Redis knows the type (list, hashes, string, set, ...) of each key...
>
> I suggest most operations can assume all keys are strings in UTF8 byte
> encoding; I'm not sure if there are any mathematical, number-based Redis
> commands that need numbers.
>
> *8. Transactions:*
>
> +1 I agree to not support transactions
>
> *9. Redis COMMAND* (https://redis.io/commands/comman
> 
>
> +1 for implementing the "COMMAND"
>
>
> -- Forwarded message --
> From: Hitesh Khamesra 
> Date: Tue, Feb 14, 2017 at 5:36 PM
> Subject: GeodeRedisAdapter improvments/feedback
> To: Geode , "u...@geode.apache.org" <
> u...@geode.apache.org>
>
>
> Current GeodeRedisAdapter implementation is based on
> https://cwiki.apache.org/confluence/display/GEODE/Geode+Redi
> s+Adapter+Proposal.
> We are looking for some feedback on Redis commands and their mapping to
> geode region.
>
> 1. Redis Type String
>   a. Usage Set k1 v1
>   b. Current implementation creates "STRING_REGION" geode-partition-region
> upfront
>   c. This k1/v1 are geode-region key/value
>   d. Any feedback?
>
> 2. List Type
>   a. usage "rpush mylist A"
>   b. Current implementation maps each list to geode-partition-region(i.e.
> mylist is geode-partition-region); with the ability to get item from
> head/tail
>   c. Feedback/vote
>   -- List type operation at region-entry level;
>   -- region-key = "mylist"
>   -- region-value = Arraylist (will support all redis list ops)
>   d. Feedback/vote: both behavior is desirable
>
>
> 3. Hashes
>   a. this represents field-value or java bean object
>   b. usage "hmset user1000 username antirez birthyear 1977 verified 1"
>   c. Current implementation maps each hashes to
> geode-partition-region(i.e. user1000 is geode-partition-region)
>   d. Feedb

[GitHub] geode issue #398: Split the redis adapter into its own package

2017-02-15 Thread metatype
Github user metatype commented on the issue:

https://github.com/apache/geode/pull/398
  
Also, I believe redis is started by providing an option to `start server`.  
How do you plan to extract that from `geode-core`?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] geode pull request #398: Split the redis adapter into its own package

2017-02-15 Thread galen-pivotal
Github user galen-pivotal commented on a diff in the pull request:

https://github.com/apache/geode/pull/398#discussion_r101374277
  
--- Diff: 
geode-core/src/main/java/org/apache/geode/internal/cache/GemFireCacheImpl.java 
---
@@ -1330,21 +1360,12 @@ private void startMemcachedServer() {
   }
 
   private void startRedisServer() {
-int port = system.getConfig().getRedisPort();
-if (port != 0) {
-  String bindAddress = system.getConfig().getRedisBindAddress();
-  assert bindAddress != null;
-  if 
(bindAddress.equals(DistributionConfig.DEFAULT_REDIS_BIND_ADDRESS)) {
-getLoggerI18n().info(
-
LocalizedStrings.GemFireCacheImpl_STARTING_GEMFIRE_REDIS_SERVER_ON_PORT_0,
-new Object[] {port});
-  } else {
-getLoggerI18n().info(
-
LocalizedStrings.GemFireCacheImpl_STARTING_GEMFIRE_REDIS_SERVER_ON_BIND_ADDRESS_0_PORT_1,
-new Object[] {bindAddress, port});
-  }
-  this.redisServer = new GeodeRedisServer(bindAddress, port);
-  this.redisServer.start();
+GeodeRedisService geodeRedisService = 
getService(GeodeRedisService.class);
--- End diff --

@metatype The plan is to have the Redis server registered as a service that 
`GemFireCacheImpl` will start when the right properties are set. 
`GeodeRedisService` is in core, but implemented by `GeodeRedisServiceImpl` as 
long as geode-redis is on the classpath.
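[Editor's note] The interface/implementation split described here can be sketched with the JDK's `ServiceLoader` standing in for Geode's internal `getService(Class)` registry. `CacheService` and its methods are hypothetical names; Geode's actual service mechanism differs.

```java
import java.util.ServiceLoader;

// The core module compiles against only this interface; an implementation
// (e.g. shipped in a geode-redis jar) is discovered on the classpath at runtime.
interface CacheService {
  void init(); // started by the cache when the matching properties are set
}

public class CacheSketch {
  // Core-side lookup: returns the first implementation registered on the
  // classpath (via META-INF/services), or null when the module is absent.
  static CacheService getService() {
    for (CacheService s : ServiceLoader.load(CacheService.class)) {
      return s;
    }
    return null;
  }

  public static void main(String[] args) {
    CacheService service = getService();
    System.out.println(service == null
        ? "no service implementation on the classpath; skipping startup"
        : "starting " + service.getClass().getName());
  }
}
```

The design benefit is the one described in the comment: `geode-core` never references the implementation class, so the optional module can be dropped from the classpath without breaking the cache.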


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: PROXY and CACHING_PROXY regions on Client

2017-02-15 Thread Swapnil Bawaskar
@John The intention behind this proposal is to make Geode client
development easy for new users. So, looking at this as a new user, I would
say that having to "create" a PROXY region only to find out that it does
nothing on the server, is more confusing than an overloaded getRegion().

To summarize, the proposal for getRegion() is:
1. lookup if the region exists already and return it; this applies to
regions that have been created through API and cache.xml. This is the
current behavior.
2. If the region does not exist:
2.a. check if it exists on the server, if so create a PROXY region under
the covers and return it. Do this only on the client
2.b. If it does not exist on the server, throw an exception.
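[Editor's note] The three steps of the proposal can be sketched in plain Java, with maps standing in for the client cache's known regions and the server's region listing. All names here are hypothetical; the real API would return a typed `Region`, not an `Object`.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GetRegionSketch {
  // Stand-ins: localRegions mimics regions created via API/cache.xml on the
  // client; serverRegions mimics what a region-listing call to the server reports.
  static final Map<String, Object> localRegions = new ConcurrentHashMap<>();
  static final Map<String, Object> serverRegions = new ConcurrentHashMap<>();

  static Object getRegion(String name) {
    Object region = localRegions.get(name);      // 1. existing region: current behavior
    if (region != null) {
      return region;
    }
    if (serverRegions.containsKey(name)) {       // 2a. exists on the server only:
      Object proxy = "PROXY:" + name;            //     create a PROXY region under the covers
      localRegions.put(name, proxy);
      return proxy;
    }
    throw new IllegalStateException(             // 2b. exists nowhere: fail fast
        "Region " + name + " does not exist on the server");
  }

  public static void main(String[] args) {
    serverRegions.put("orders", new Object());
    System.out.println(getRegion("orders")); // prints PROXY:orders
  }
}
```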



On Wed, Feb 15, 2017 at 9:38 AM John Blum  wrote:

> @Eric-
>
> Hmm...
>
> Well, I'd argue that it is still confusing to "*overload*" the purpose of
> getRegion("path") to dually "*get*" (the primary function/purpose) and also
> "*create*" (secondary).
>
> I'd also say that the getRegion("path") API call is not exclusive to a
> *ClientCache*, particularly since getRegion("path") is on RegionService
> <
> http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/RegionService.html#getRegion(java.lang.String)
> >
> [1],
> which both ClientCache and Cache implement, indirectly through
> GemFireCache,
> I might add.  Therefore, getRegion("path") has a completely different
> meaning server-side (or in the embedded "peer cache" UC).
>
> -j
>
> [1]
>
> http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/RegionService.html#getRegion(java.lang.String)
>
> On Wed, Feb 15, 2017 at 9:29 AM, Anthony Baker  wrote:
>
> > Introducing an API like this gives us the opportunity to split the
> > client/server region API’s.  I don’t think we should return Region, but
> > something specific to “server view”.  How would those API’s operate on a
> > CACHING_PROXY?
> >
> > Anthony
> >
> > > On Feb 15, 2017, at 6:44 AM, Swapnil Bawaskar 
> > wrote:
> > >
> > > /**
> > > * @return
> > > */
> > > Region serverView();
> > >
> >
> >
>
>
> --
> -John
> john.blum10101 (skype)
>


[jira] [Commented] (GEODE-2464) Review Redis tests

2017-02-15 Thread Galen O'Sullivan (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868511#comment-15868511
 ] 

Galen O'Sullivan commented on GEODE-2464:
-

We'll also need some manual tests to make sure that GFSH still works, 
geode-redis is getting included in the build product, etc.

> Review Redis tests
> --
>
> Key: GEODE-2464
> URL: https://issues.apache.org/jira/browse/GEODE-2464
> Project: Geode
>  Issue Type: Sub-task
>  Components: redis
>Reporter: Galen O'Sullivan
> Fix For: 1.2.0
>
>
> The existing Redis tests could use some cleanup and probably expansion.
> * [~ukohlmeyer] and I ([~gosullivan]) did some common test code with 
> `RedisTestBase`; there is probably some room for improvement there.
> * There is a lot of repetition in the test code for the Redis adapter. We can 
> improve this.
> * Randomization: use junit-quickcheck {{@Property}} tests?
> * Make sure every Redis command gets tested. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-2489) Tombstone messages with keys are sent to peer partitioned region nodes even though no clients are registered

2017-02-15 Thread Anilkumar Gingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade reassigned GEODE-2489:


Assignee: Anilkumar Gingade

> Tombstone messages with keys are sent to peer partitioned region nodes even 
> though no clients are registered
> ---
>
> Key: GEODE-2489
> URL: https://issues.apache.org/jira/browse/GEODE-2489
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Anilkumar Gingade
>Assignee: Anilkumar Gingade
>
> Tombstone:
> As part of consistency checking,  when an entry is destroyed, the member 
> temporarily retains the entry to detect possible conflicts with operations 
> that have occurred. The retained entry is referred to as a tombstone. 
> When tombstones are removed, tombstone messages are sent to region replicas; 
> and in case of Partitioned Region (PR) messages are also sent to peer region 
> nodes for client events.
> Currently tombstone messages meant for clients that have all the keys removed 
> are getting sent to peer PR nodes even though no clients are registered on 
> those peers.
> Based on the number of tombstone keys processed (by default 10), this could 
> be a large message sent to the peer node, which could impact the performance 
> of the system/cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2490) Tombstone messages are getting processed inline

2017-02-15 Thread Anilkumar Gingade (JIRA)
Anilkumar Gingade created GEODE-2490:


 Summary: Tombstone messages are getting processed inline
 Key: GEODE-2490
 URL: https://issues.apache.org/jira/browse/GEODE-2490
 Project: Geode
  Issue Type: Bug
  Components: regions
Reporter: Anilkumar Gingade


Tombstone:
As part of consistency checking, when an entry is destroyed, the member 
temporarily retains the entry to detect possible conflicts with operations that 
have occurred. The retained entry is referred to as a tombstone.

When tombstones are removed, tombstone messages are sent to region replicas; 
and in case of Partitioned Region (PR) messages are also sent to peer region 
nodes for client events.

Currently the tombstone messages sent for replicas are getting processed 
in-line. Based on the number of nodes in the cluster, this may take a long time 
to process, impacting other cache operations that are required to be processed 
in-line.
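[Editor's note] One direction the ticket implies — handing replica tombstone batches to a dedicated executor instead of running them on the message-processing thread — can be sketched as follows. The class, thread name, and structure are illustrative assumptions, not Geode's actual internals.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TombstoneSketch {
  // Hypothetical: a dedicated daemon thread so tombstone batch removal does
  // not block the thread that delivers and processes other cache operations.
  static final ExecutorService tombstonePool = Executors.newSingleThreadExecutor(r -> {
    Thread t = new Thread(r, "tombstone-gc");
    t.setDaemon(true);
    return t;
  });

  // Per the ticket, removeBatch currently runs in-line on the message
  // thread; here it is handed off so the caller returns immediately.
  static void processTombstoneMessage(Runnable removeBatch) {
    tombstonePool.submit(removeBatch);
  }

  public static void main(String[] args) throws InterruptedException {
    CountDownLatch done = new CountDownLatch(1);
    processTombstoneMessage(() -> {
      System.out.println("removing tombstone batch on " + Thread.currentThread().getName());
      done.countDown();
    });
    done.await(5, TimeUnit.SECONDS); // the "message thread" was free immediately
  }
}
```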



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] geode pull request #398: Split the redis adapter into its own package

2017-02-15 Thread metatype
Github user metatype commented on a diff in the pull request:

https://github.com/apache/geode/pull/398#discussion_r101375546
  
--- Diff: 
geode-core/src/main/java/org/apache/geode/internal/cache/GemFireCacheImpl.java 
---
@@ -1330,21 +1360,12 @@ private void startMemcachedServer() {
   }
 
   private void startRedisServer() {
-int port = system.getConfig().getRedisPort();
-if (port != 0) {
-  String bindAddress = system.getConfig().getRedisBindAddress();
-  assert bindAddress != null;
-  if 
(bindAddress.equals(DistributionConfig.DEFAULT_REDIS_BIND_ADDRESS)) {
-getLoggerI18n().info(
-
LocalizedStrings.GemFireCacheImpl_STARTING_GEMFIRE_REDIS_SERVER_ON_PORT_0,
-new Object[] {port});
-  } else {
-getLoggerI18n().info(
-
LocalizedStrings.GemFireCacheImpl_STARTING_GEMFIRE_REDIS_SERVER_ON_BIND_ADDRESS_0_PORT_1,
-new Object[] {bindAddress, port});
-  }
-  this.redisServer = new GeodeRedisServer(bindAddress, port);
-  this.redisServer.start();
+GeodeRedisService geodeRedisService = 
getService(GeodeRedisService.class);
--- End diff --

Is there a reason why we can't extract *all* redis code into `geode-redis`?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (GEODE-2490) Tombstone messages are getting processed inline

2017-02-15 Thread Anilkumar Gingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade reassigned GEODE-2490:


Assignee: Anilkumar Gingade

> Tombstone messages are getting processed inline
> ---
>
> Key: GEODE-2490
> URL: https://issues.apache.org/jira/browse/GEODE-2490
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Anilkumar Gingade
>Assignee: Anilkumar Gingade
>
> Tombstone:
> As part of consistency checking, when an entry is destroyed, the member 
> temporarily retains the entry to detect possible conflicts with operations 
> that have occurred. The retained entry is referred to as a tombstone.
> When tombstones are removed, tombstone messages are sent to region replicas; 
> and in case of Partitioned Region (PR) messages are also sent to peer region 
> nodes for client events.
> Currently the tombstone messages sent for replicas are getting processed 
> in-line. Based on the number of nodes in the cluster, this may take a long 
> time to process, impacting other cache operations that are required to be 
> processed in-line. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: PROXY and CACHING_PROXY regions on Client

2017-02-15 Thread Swapnil Bawaskar
Re: breaking existing code; we could throw UnsupportedOperationException
for these two methods:
- invalidateRegion()
- destroyRegion()

I do not see anyone using/depending on invalidateRegion, since the behavior
currently is a no-op.
destroyRegion currently only gets rid of the handle to the region, so it is
most likely being used when the client is shutting down, so fixing existing
applications should be straightforward.

On Wed, Feb 15, 2017 at 12:27 PM Swapnil Bawaskar 
wrote:

> @John The intention behind this proposal is to make Geode client
> development easy for new users. So, looking at this as a new user, I would
> say that having to "create" a PROXY region only to find out that it does
> nothing on the server, is more confusing than an overloaded getRegion().
>
> To summarize, the proposal for getRegion() is:
> 1. lookup if the region exists already and return it; this applies to
> regions that have been created through API and cache.xml. This is the
> current behavior.
> 2. If the region does not exist:
> 2.a. check if it exists on the server, if so create a PROXY region under
> the covers and return it. Do this only on the client
> 2.b. If it does not exist on the server, throw an exception.
>
>
>
> On Wed, Feb 15, 2017 at 9:38 AM John Blum  wrote:
>
> @Eric-
>
> Hmm...
>
> Well, I'd argue that it is still confusing to "*overload*" the purpose of
> getRegion("path") to dually "*get*" (the primary function/purpose) and also
> "*create*" (secondary).
>
> I'd also say that the getRegion("path") API call is not exclusive to a
> *ClientCache*, particularly since getRegion("path") is on RegionService
> <
> http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/RegionService.html#getRegion(java.lang.String)
> >
> [1],
> which both ClientCache and Cache implement, indirectly through
> GemFireCache,
> I might add.  Therefore, getRegion("path") has a completely different
> meaning server-side (or in the embedded "peer cache" UC).
>
> -j
>
> [1]
>
> http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/RegionService.html#getRegion(java.lang.String)
>
> On Wed, Feb 15, 2017 at 9:29 AM, Anthony Baker  wrote:
>
> > Introducing an API like this gives us the opportunity to split the
> > client/server region API’s.  I don’t think we should return Region, but
> > something specific to “server view”.  How would those API’s operate on a
> > CACHING_PROXY?
> >
> > Anthony
> >
> > > On Feb 15, 2017, at 6:44 AM, Swapnil Bawaskar 
> > wrote:
> > >
> > > /**
> > > * @return
> > > */
> > > Region serverView();
> > >
> >
> >
>
>
> --
> -John
> john.blum10101 (skype)
>
>


Re: GeodeRedisAdapter improvments/feedback

2017-02-15 Thread Hitesh Khamesra
>>>The Redis adapter was designed so that we can scale all the Redis data
structures horizontally. If you bring the data structures to region entry
level, there is no reason for anyone to use our implementation over Redis.
Hmm, here we need to understand when we need to create a partition region for 
the Redis data types (items 2, 3, 4, 5).
Creating a partition region for each use case may not be feasible. See a couple 
of use-cases mentioned earlier in the thread. 



 
On Tue, Feb 14, 2017 at 3:15 PM Jason Huynh  wrote:

> Hi Hitesh,
>
> Not sure about everyone else, but I had a hard time reading this,  however
> I think I figured out what you were describing... the only part I still am
> unsure about is  Feedback/vote: both behaviour is desirable.  Do you mean
> you want feedback and voting on whether both behaviors are desired?  As in
> old implementation and new implementation?
>
> 2,3,4)  The new implementation would mean all the data for a specific data
> structure is contained in a single bucket.  So the individual data
> structures are not quite scalable.  How would you allow scaling of a single
> data structure?
>
> On Tue, Feb 14, 2017 at 3:05 PM Real Wes  wrote:
>
> In what format do you want the feedback Hitesh?  For now I’ll just comment:
>
> 1. Redis Type String
> No comments except that a future Geode value-add would be to extend the
> Jedis client so that the K/V’s are not compressed. In this way OQL and CQ
> will work.  The tradeoff of this is that the data cannot be read by a
> native redis client but for Geode users it’s great. Call the new client
> Geodis.
>
> 2. List/ Hash/ Set/ SortedSet
> Creating a separate region for each creates a constraint that the keys are
> limited to the characters for region names, which are A-z/0-9/ - and _.
> Everything else is out. Redis users might start asking questions why their
> list named ++^^/## throws an error. Your suggestion to make it a key rather
> than a region solves this. Furthermore, creating a new region every time a
> new Redis collection is created is going to be slow. I’m not sure why a
> region was created but I’m sure it made sense to the developer at the time.
>
> 7. Default Config
> Can’t we configure a gfsh option to default to the region types we want?
> Customer A will want PARTITION but Customer B will want
> PARTITION_REDUNDANT_EXPIRATION_PERSISTENT.  I wonder if we can consider a
> geode> create region —redisType=PARTITION_REDUNDANT_EXPIRATION_PERSISTENT
> that makes _all_ Redis regions of that type?
>
>
>
> On Feb 14, 2017, at 5:36 PM, Hitesh Khamesra <hitesh...@yahoo.com> wrote:
>
> Current GeodeRedisAdapter implementation is based on
> https://cwiki.apache.org/confluence/display/GEODE/Geode+Redis+Adapter+Proposal
> .
> We are looking for some feedback on Redis commands and their mapping to
> geode region.
>
> 1. Redis Type String
>  a. Usage Set k1 v1
>  b. Current implementation creates "STRING_REGION" geode-partition-region
> upfront
>  c. This k1/v1 are geode-region key/value
>  d. Any feedback?
>
> 2. List Type
>  a. usage "rpush mylist A"
>  b. Current implementation maps each list to geode-partition-region(i.e.
> mylist is geode-partition-region); with the ability to get item from
> head/tail
>  c. Feedback/vote
>      -- List type operation at region-entry level;
>      -- region-key = "mylist"
>      -- region-value = Arraylist (will support all redis list ops)
>  d. Feedback/vote: both behavior is desirable
>
>
> 3. Hashes
>  a. this represents field-value or java bean object
>  b. usage "hmset user1000 username antirez birthyear 1977 verified 1"
>  c. Current implementation maps each hashes to
> geode-partition-region(i.e. user1000 is geode-partition-region)
>  d. Feedback/vote
>    -- Should we map hashes to region-entry
>    -- region-key = user1000
>    -- region-value = map
>    -- This will provide java bean sort to behaviour with 10s of
> field-value
>    -- Personally I would prefer this..
>  e. Feedback/vote: both behaviour is desirable
>
> 4. Sets
>  a. This represents unique keys in set
>  b. usage "sadd myset 1 2 3"
>  c. Current implementation maps each sadd to geode-partition-region(i.e.
> myset is geode-partition-region)
>  d. Feedback/vote
>    -- Should we map set to region-entry
>    -- region-key = myset
>    -- region-value = Hashset
>  e. Feedback/vote: both behaviour is desirable
>
> 5. SortedSets
>  a. This represents unique keys in set with score (usecase Query top-10)
>  b. usage "zadd hackers 1940 "Alan Kay""
>  c. Current implementation maps each zadd to geode-partition-region(i.e.
> hackers is geode-partition-region)
>  d. Feedback/vote
>    -- Should we map set to region-entry
>    -- region-key = hackers
>    -- region-value = Sorted Hashset
>  e. Feedback/vote: both behaviour is desirable
>
> 6. HyperLogLogs
>  a. A HyperLogLog is a probabilistic data structure used in order to
> count unique things (technically this is referred to estimating the
> cardinality of a set).
>  b. usage "pfadd h

Re: GeodeRedisAdapter improvments/feedback

2017-02-15 Thread Real Wes

We should be careful here on a decision. If we start replicating fat 
lists/sets/hash maps synchronously on every update, the Geode user will 
complain about how slow the API is compared with Redis. Note: Redis replicates 
asynchronously. For fat collections we’re better off creating a region and 
suffering the penalty of constraining the key name to region name constraints 
(no colons, only alphanumerics, etc).


On Feb 15, 2017, at 3:24 PM, Hitesh Khamesra <hitesh...@yahoo.com> wrote:

>>>The Redis adapter was designed so that we can scale all the Redis data
structures horizontally. If you bring the data structures to region entry
level, there is no reason for anyone to use our implementation over Redis.

Hmm, here we need to understand when we need to create a partition region for 
the Redis data types (items 2, 3, 4, 5).
Creating a partition region for each use case may not be feasible. See a couple 
of use-cases mentioned earlier in the thread.





On Tue, Feb 14, 2017 at 3:15 PM Jason Huynh <jhu...@pivotal.io> wrote:

> Hi Hitesh,
>
> Not sure about everyone else, but I had a hard time reading this,  however
> I think I figured out what you were describing... the only part I still am
> unsure about is  Feedback/vote: both behaviour is desirable.  Do you mean
> you want feedback and voting on whether both behaviors are desired?  As in
> old implementation and new implementation?
>
> 2,3,4)  The new implementation would mean all the data for a specific data
> structure is contained in a single bucket.  So the individual data
> structures are not quite scalable.  How would you allow scaling of a single
> data structure?
>
> On Tue, Feb 14, 2017 at 3:05 PM Real Wes <thereal...@outlook.com> wrote:
>
> In what format do you want the feedback Hitesh?  For now I’ll just comment:
>
> 1. Redis Type String
> No comments except that a future Geode value-add would be to extend the
> Jedis client so that the K/V’s are not compressed. In this way OQL and CQ
> will work.  The tradeoff of this is that the data cannot be read by a
> native redis client but for Geode users it’s great. Call the new client
> Geodis.
>
> 2. List/ Hash/ Set/ SortedSet
> Creating a separate region for each creates a constraint that the keys are
> limited to the characters for region names, which are A-z/0-9/ - and _.
> Everything else is out. Redis users might start asking questions why their
> list named ++^^/## throws an error. Your suggestion to make it a key rather
> than a region solves this. Furthermore, creating a new region every time a
> new Redis collection is created is going to be slow. I’m not sure why a
> region was created but I’m sure it made sense to the developer at the time.
>
> 7. Default Config
> Can’t we configure a gfsh option to default to the region types we want?
> Customer A will want PARTITION but Customer B will want
> PARTITION_REDUNDANT_EXPIRATION_PERSISTENT.  I wonder if we can consider a
> geode> create region —redisType=PARTITION_REDUNDANT_EXPIRATION_PERSISTENT
> that makes _all_ Redis regions of that type?
>
>
>
> On Feb 14, 2017, at 5:36 PM, Hitesh Khamesra <hitesh...@yahoo.com> wrote:
>
> Current GeodeRedisAdapter implementation is based on
> https://cwiki.apache.org/confluence/display/GEODE/Geode+Redis+Adapter+Proposal
> .
> We are looking for some feedback on Redis commands and their mapping to
> geode region.
>
> 1. Redis Type String
>  a. Usage Set k1 v1
>  b. Current implementation creates "STRING_REGION" geode-partition-region
> upfront
>  c. This k1/v1 are geode-region key/value
>  d. Any feedback?
>
> 2. List Type
>  a. usage "rpush mylist A"
>  b. Current implementation maps each list to geode-partition-region(i.e.
> mylist is geode-partition-region); with the ability to get item from
> head/tail
>  c. Feedback/vote
>  -- List type operation at region-entry level;
>  -- region-key = "mylist"
>  -- region-value = Arraylist (will support all redis list ops)
>  d. Feedback/vote: both behavior is desirable
>
>
> 3. Hashes
>  a. this represents field-value or java bean object
>  b. usage "hmset user1000 username antirez birthyear 1977 verified 1"
>  c. Current implementation maps each hashes to
> geode-partition-region(i.e. user1000 is geode-partition-region)
>  d. Feedback/vote
>-- Should we map hashes to region-entry
>-- region-key = user1000
>-- region-value = map
>-- This will provide java bean sort of behaviour with 10s of
> field-value pairs
>-- Personally I would prefer this..
>  e. Feedback/vote: both behaviour is desirable
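The entry-level mapping proposed in (d) can be sketched with a plain map standing in for the partition region (all class and method names here are illustrative, not the adapter's real API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of mapping Redis hashes to region entries: the Redis key ("user1000")
// becomes the region key, and the field->value pairs live in the entry value.
// A HashMap stands in for the geode partition region.
public class HashAdapterSketch {
    final Map<String, Map<String, String>> region = new HashMap<>();

    // hmset user1000 username antirez birthyear 1977 ...
    void hmset(String key, String... fieldValues) {
        Map<String, String> hash = region.computeIfAbsent(key, k -> new HashMap<>());
        for (int i = 0; i + 1 < fieldValues.length; i += 2) {
            hash.put(fieldValues[i], fieldValues[i + 1]);
        }
    }

    String hget(String key, String field) {
        Map<String, String> hash = region.get(key);
        return hash == null ? null : hash.get(field);
    }
}
```

With this shape every hash is one entry, so no region is created per Redis key — the trade-off raised elsewhere in the thread is that the whole hash then lives in a single bucket.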
>
> 4. Sets
>  a. This represents unique keys in set
>  b. usage "sadd myset 1 2 3"
>  c. Current implementation maps each sadd to geode-partition-region(i.e.
> myset is geode-partition-region)
>  d. Feedback/vote
>-- Should we map set to region-entry
>-- region-key = myset
>-- region-value = Hashset
>  e. Feedback/vote: both behaviour is

[jira] [Assigned] (GEODE-2491) Reduce logging of handled exceptions in LuceneEventListener and LuceneBucketListeners

2017-02-15 Thread Jason Huynh (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-2491:
--

Assignee: Jason Huynh

> Reduce logging of handled exceptions in LuceneEventListener and 
> LuceneBucketListeners
> -
>
> Key: GEODE-2491
> URL: https://issues.apache.org/jira/browse/GEODE-2491
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>
> Currently we handle specific exception types but continue to log them as 
> warnings.  Instead we should probably log them at a debug level so they won't 
> show up in regular usage of the product because we do expect these exceptions 
> to be thrown/caught for certain scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2491) Reduce logging of handled exceptions in LuceneEventListener and LuceneBucketListeners

2017-02-15 Thread Jason Huynh (JIRA)
Jason Huynh created GEODE-2491:
--

 Summary: Reduce logging of handled exceptions in 
LuceneEventListener and LuceneBucketListeners
 Key: GEODE-2491
 URL: https://issues.apache.org/jira/browse/GEODE-2491
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Jason Huynh


Currently we handle specific exception types but continue to log them as 
warnings.  Instead we should probably log them at a debug level so they won't 
show up in regular usage of the product because we do expect these exceptions 
to be thrown/caught for certain scenarios.





[jira] [Resolved] (GEODE-2424) afterSecondary call needs to handle specific exception rather than generic exception

2017-02-15 Thread Jason Huynh (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-2424.

Resolution: Duplicate
  Assignee: Jason Huynh

Going to close this one as it is related to a larger effort: GEODE-2491
Reduce logging of handled exceptions in LuceneEventListener and 
LuceneBucketListeners.



> afterSecondary call needs to handle specific exception rather than generic 
> exception
> 
>
> Key: GEODE-2424
> URL: https://issues.apache.org/jira/browse/GEODE-2424
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: Jason Huynh
>
> {code:title=LuceneBucketListener.java|borderStyle=solid}
>   public void afterSecondary(int bucketId) {
> dm.getWaitingThreadPool().execute(() -> {
>   try {
> lucenePartitionRepositoryManager.computeRepository(bucketId);
>   } catch (Exception e) {
> logger.warn("Exception while cleaning up Lucene Index Repository", e);
>   }
> });
>   }
> {code}
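The change GEODE-2491 describes — catch the specific exceptions expected during bucket movement and log them at debug, keeping warn for anything unexpected — follows a pattern like this (a self-contained sketch using java.util.logging; the exception type is a hypothetical stand-in for the real Geode ones):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LuceneListenerSketch {
    static final Logger logger = Logger.getLogger(LuceneListenerSketch.class.getName());

    // Stand-in for an exception that is expected while buckets move between members.
    static class BucketMovedException extends RuntimeException {}

    // Hypothetical repository computation that may fail for expected reasons.
    static void computeRepository(int bucketId) {
        throw new BucketMovedException();
    }

    public static void afterSecondary(int bucketId) {
        try {
            computeRepository(bucketId);
        } catch (BucketMovedException e) {
            // Expected during rebalancing: debug/FINE keeps it out of normal logs.
            logger.log(Level.FINE, "Exception while cleaning up Lucene Index Repository", e);
        } catch (RuntimeException e) {
            // Anything else is genuinely unexpected, so keep it visible at warn.
            logger.log(Level.WARNING, "Unexpected exception while cleaning up repository", e);
        }
    }
}
```

The key difference from the snippet above is that the catch is no longer a blanket `catch (Exception e)` at warn level: expected failures are demoted, unexpected ones still surface.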





Re: Gradle build for idea

2017-02-15 Thread Kirk Lund
I think this sort of error message would indicate that we have something
wrong in our gradle files. So, yes we should be concerned and fix it.
Unfortunately, I don't know how to fix it...


On Tue, Feb 14, 2017 at 1:36 PM, Udo Kohlmeyer wrote:

> Ok... just to clarify... I have imported the project into Idea using the
> build in gradle support.
>
> But when I run the idea command on command line, that is when I see the
> failure. I was wondering if we should be concerned about this...
>
> --Udo
>
>
>
> On 2/14/17 13:33, Jinmei Liao wrote:
>
>> I do not need to run gradle command in order to use IDEA. I just imported
>> those modules, and IDEA will sort things out on its own.
>>
>> On Tue, Feb 14, 2017 at 1:20 PM, Udo Kohlmeyer  wrote:
>>
>> Hi there,
>>>
>>> When I run `gradle idea` the following exception is thrown.
>>>
>>> * What went wrong:
>>> Execution failed for task ':geode-core:ideaModule'.
>>>
 Cannot change dependencies of configuration ':geode-core:antlr' after it

>>> has been included in dependency resolution.
>>>
>>> Is this something that we can resolve? Any idea what could be causing
>>> this
>>> failure?
>>>
>>> --Udo
>>>
>>>
>>>
>>
>


Review Request 56719: GEODE-2491: Reduce logging of handled exceptions in LuceneEventListener and LuceneBucketListeners

2017-02-15 Thread Jason Huynh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56719/
---

Review request for geode, Lynn Hughes-Godfrey, nabarun nag, and Dan Smith.


Repository: geode


Description
---

Reduced logging level of specific exceptions from warn level to debug

I can collapse the exceptions into a single catch, wasn't sure if we wanted to 
log something different for each type or not...

No longer catching all exceptions when closing the lucene index, instead only 
handling specific ones, we can add more if we see others being thrown...


Diffs
-

  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
 0af2719 
  
geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneEventListener.java
 44453e4 

Diff: https://reviews.apache.org/r/56719/diff/


Testing
---


Thanks,

Jason Huynh



Re: GeodeRedisAdapter improvments/feedback

2017-02-15 Thread Real Wes
Thinking about this, I think that the “spill”/ “unspill” option may actually be 
the best solution.  If the criteria waffles back and forth along the threshold, 
well, that’s the acceptable worst case.

How’s this?:

1) Create a separate region for the collection key
 - for fat collections that are updated frequently
ADVANTAGE: speed of replication
DISADVANTAGE: constraint on key name

2) Put the collection as an entry value:
   - for small collections and read-only fat collections
ADVANTAGE: no need to create a separate region

We would track the metrics and automatically convert based on a combination of 
frequency of updates and size.

We next define what a fat collection is, such as over nnMB.
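Under those assumptions (an item-count threshold standing in for the proposed byte-size limit, and plain maps standing in for regions), the spill side of the policy looks roughly like:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the size-threshold storage policy (all names hypothetical).
// Small collections live as a single entry value; "fat" ones are moved to
// their own map, standing in for a dedicated Geode region.
public class SpillPolicySketch {
    static final int FAT_THRESHOLD = 3;  // illustrative; the proposal suggests a byte-size limit

    final Map<String, List<String>> entryValues = new HashMap<>();      // key -> in-entry list
    final Map<String, List<String>> dedicatedRegions = new HashMap<>(); // key -> own "region"

    void rpush(String key, String item) {
        List<String> list = dedicatedRegions.containsKey(key)
                ? dedicatedRegions.get(key)
                : entryValues.computeIfAbsent(key, k -> new ArrayList<>());
        list.add(item);
        // "Spill": once the list crosses the threshold, promote it to its own region.
        if (!dedicatedRegions.containsKey(key) && list.size() > FAT_THRESHOLD) {
            dedicatedRegions.put(key, entryValues.remove(key));
        }
    }

    boolean isSpilled(String key) {
        return dedicatedRegions.containsKey(key);
    }
}
```

The "unspill" direction is deliberately omitted: as noted later in the thread, demoting a collection that shrinks back under the threshold is where the tricky transition logic lives.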


On Feb 14, 2017, at 8:12 PM, Jason Huynh <jhu...@pivotal.io> wrote:

The concern about the threshold to spill over would be do you "unspill" over?  
Like what if the collection contracts under the threshold and teeters around 
the threshold.  If the user can configure this size, then wouldn't they just 
know they want a "large" vs a "small?"

I think Swapnil makes a good point that our value add would be that we can 
scale those structures, whereas redis can already do what the "new" 
implementation is doing.



On Tue, Feb 14, 2017 at 4:59 PM Galen M O'Sullivan <gosulli...@pivotal.io> wrote:
If we put them in separate regions, we'll have the overhead of looking up
in two regions added to each and every operation, and the overhead of
creating all these regions.

If we really wanted to we could have some threshold at which we spill
collections over into their own regions, and have something like the best
of both worlds. It's more complex, though, and I don't know how many people
actually use truly huge collections.

On Tue, Feb 14, 2017 at 4:21 PM, Hitesh Khamesra <
hitesh...@yahoo.com.invalid> wrote:

> Jason/Dan: Sorry to hear about that. But both of you have asked the right
> question.
> it depends on your use-case(item 2,3,4,5) . For example "hashes" can be
> use to define key-value pair or java bean. In this case  probably it is
> better to keep that hash at region-entry level.  But if you want to know
> top 10 tweets which are trending then probably you want use
> partition-region for "sorted-set".
>
>
>   From: Jason Huynh <jhu...@pivotal.io>
>  To: dev@geode.apache.org; "u...@geode.apache.org" <u...@geode.apache.org>;
> Hitesh Khamesra <hitesh...@yahoo.com>
>  Sent: Tuesday, February 14, 2017 3:15 PM
>  Subject: Re: GeodeRedisAdapter improvments/feedback
>
> Hi Hitesh,
>
> Not sure about everyone else, but I had a hard time reading this,  however
> I think I figured out what you were describing... the only part I still am
> unsure about is  Feedback/vote: both behaviour is desirable.  Do you mean
> you want feedback and voting on whether both behaviors are desired?  As in
> old implementation and new implementation?
>
> 2,3,4)  The new implementation would mean all the data for a specific data
> structure is contained in a single bucket.  So the individual data
> structures are not quite scalable.  How would you allow scaling of a single
> data structure?
>
> On Tue, Feb 14, 2017 at 3:05 PM Real Wes <thereal...@outlook.com> wrote:
>
> > In what format do you want the feedback Hitesh?  For now I’ll just
> comment:
> >
> > 1. Redis Type String
> > No comments except that a future Geode value-add would be to extend the
> > Jedis client so that the K/V’s are not compressed. In this way OQL and CQ
> > will work.  The tradeoff of this is that the data cannot be read by a
> > native redis client but for Geode users it’s great. Call the new client
> > Geodis.
> >
> > 2. List/ Hash/ Set/ SortedSet
> > Creating a separate region for each creates a constraint that the keys
> are
> > limited to the characters for region names, which are A-z/0-9/ - and _.
> > Everything else is out. Redis users might start asking questions why
> their
> > list named ++^^/## throws an error. Your suggestion to make it a key
> rather
> > than a region solves this. Furthermore, creating a new region every time
> a
> > new Redis collection is created is going to be slow. I’m not sure why a
> > region was created but I’m sure it made sense to the developer at the
> time.
> >
> > 7. Default Config
> > Can’t we configure a gfsh option to default to the region types we want?
> > Customer A will want PARTITION but Customer B will want
> > PARTITION_REDUNDANT_EXPIRATION_PERSISTENT.  I wonder if we can consider
> a
> > geode> create region —redisType=PARTITION_REDUNDANT_EXPIRATION_
> PERSISTENT
> > that makes _all_ Redis regions of that type?
> >
> >
> >
> > On Feb 14, 2017, at 5:36 PM, Hitesh Khamesra <hitesh...@yahoo.com> wrote:
> >
> > Current GeodeRedisAdapter implementation is based on
> > https://cwiki.apache.org/conflue

[GitHub] geode-native pull request #12: GEODE-2309: Remove or ignore apache-rat flagg...

2017-02-15 Thread dgkimura
GitHub user dgkimura opened a pull request:

https://github.com/apache/geode-native/pull/12

GEODE-2309: Remove or ignore apache-rat flagged files

`resolv.config` used to work around Solaris SPARC networking issues on a 
local OpenStack instance.  It should not be needed going forward.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dgkimura/geode-native feature/GEODE-2309

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/geode-native/pull/12.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #12


commit 8d29d42869b4df5a83ef83c8e2c8992074b378ef
Author: David Kimura 
Date:   2017-02-15T20:58:33Z

GEODE-2309: Remove or ignore apache-rat flagged files




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (GEODE-2309) Replace or add ASF copyright statements in source.

2017-02-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868561#comment-15868561
 ] 

ASF GitHub Bot commented on GEODE-2309:
---

GitHub user dgkimura opened a pull request:

https://github.com/apache/geode-native/pull/12

GEODE-2309: Remove or ignore apache-rat flagged files

`resolv.config` used to work around Solaris SPARC networking issues on a 
local OpenStack instance.  It should not be needed going forward.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dgkimura/geode-native feature/GEODE-2309

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/geode-native/pull/12.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #12


commit 8d29d42869b4df5a83ef83c8e2c8992074b378ef
Author: David Kimura 
Date:   2017-02-15T20:58:33Z

GEODE-2309: Remove or ignore apache-rat flagged files




> Replace or add ASF copyright statements in source.
> --
>
> Key: GEODE-2309
> URL: https://issues.apache.org/jira/browse/GEODE-2309
> Project: Geode
>  Issue Type: Task
>  Components: native client
>Reporter: Jacob S. Barrett
>






Re: GeodeRedisAdapter improvments/feedback

2017-02-15 Thread Jason Huynh
With the suggestion from Wes, the constraint on the names would have to
apply for both small and large.  We wouldn't want the thing to explode when
it gets converted...

Is there a way to just make it configurable?  If they know they want a
"large" set, somehow let them specify it.  Otherwise go with the "small"
set?

On Wed, Feb 15, 2017 at 1:01 PM Real Wes  wrote:

> Thinking about this, I think that the “spill”/ “unspill” option may
> actually be the best solution.  If the criteria waffles back and forth
> along the threshold, well, that’s the acceptable worst case.
>
> How’s this?:
>
> 1) Create a separate region for the collection key
>  - for fat collections that are updated frequently
> ADVANTAGE: speed of replication
> DISADVANTAGE: constraint on key name
>
> 2) Put the collection as an entry value:
>- for small collections and read-only fat collections
> ADVANTAGE: no need to create a separate region
>
> We would track the metrics and automatically convert based on a
> combination of frequency of updates and size.
>
> We next define what a fat collection is, such as over nnMB.
>
>
> On Feb 14, 2017, at 8:12 PM, Jason Huynh <jhu...@pivotal.io> wrote:
>
> The concern about the threshold to spill over would be do you "unspill"
> over?  Like what if the collection contracts under the threshold and
> teeters around the threshold.  If the user can configure this size, then
> wouldn't they just know they want a "large" vs a "small?"
>
> I think Swapnil makes a good point that our value add would be that we can
> scale those structures, whereas redis can already do what the "new"
> implementation is doing.
>
>
>
> On Tue, Feb 14, 2017 at 4:59 PM Galen M O'Sullivan wrote:
> If we put them in separate regions, we'll have the overhead of looking up
> in two regions added to each and every operation, and the overhead of
> creating all these regions.
>
> If we really wanted to we could have some threshold at which we spill
> collections over into their own regions, and have something like the best
> of both worlds. It's more complex, though, and I don't know how many people
> actually use truly huge collections.
>
> On Tue, Feb 14, 2017 at 4:21 PM, Hitesh Khamesra <
> hitesh...@yahoo.com.invalid> wrote:
>
> > Jason/Dan: Sorry to hear about that. But both of you have asked the right
> > question.
> > it depends on your use-case(item 2,3,4,5) . For example "hashes" can be
> > use to define key-value pair or java bean. In this case  probably it is
> > better to keep that hash at region-entry level.  But if you want to know
> > top 10 tweets which are trending then probably you want use
> > partition-region for "sorted-set".
> >
> >
> >   From: Jason Huynh <jhu...@pivotal.io>
> >  To: dev@geode.apache.org; "u...@geode.apache.org" <u...@geode.apache.org>;
> > Hitesh Khamesra <hitesh...@yahoo.com>
> >  Sent: Tuesday, February 14, 2017 3:15 PM
> >  Subject: Re: GeodeRedisAdapter improvments/feedback
> >
> > Hi Hitesh,
> >
> > Not sure about everyone else, but I had a hard time reading this,
> however
> > I think I figured out what you were describing... the only part I still
> am
> > unsure about is  Feedback/vote: both behaviour is desirable.  Do you mean
> > you want feedback and voting on whether both behaviors are desired?  As
> in
> > old implementation and new implementation?
> >
> > 2,3,4)  The new implementation would mean all the data for a specific
> data
> > structure is contained in a single bucket.  So the individual data
> > structures are not quite scalable.  How would you allow scaling of a
> single
> > data structure?
> >
> > On Tue, Feb 14, 2017 at 3:05 PM Real Wes <thereal...@outlook.com> wrote:
> >
> > > In what format do you want the feedback Hitesh?  For now I’ll just
> > comment:
> > >
> > > 1. Redis Type String
> > > No comments except that a future Geode value-add would be to extend the
> > > Jedis client so that the K/V’s are not compressed. In this way OQL and
> CQ
> > > will work.  The tradeoff of this is that the data cannot be read by a
> > > native redis client but for Geode users it’s great. Call the new client
> > > Geodis.
> > >
> > > 2. List/ Hash/ Set/ SortedSet
> > > Creating a separate region for each creates a constraint that the keys
> > are
> > > limited to the characters for region names, which are A-z/0-9/ - and _.
> > > Everything else is out. Redis users might start asking questions why
> > their
> > > list named ++^^/## throws an error. Your suggestion to make it a key
> > rather
> > > than a region solves this. Furthermore, creating a new region every
> time
> > a
> > > new Redis collection is created is going to be slow. I’m not sure why a
> > > region was created but I’m sure it made sense to the developer at the
> > time.
> > >
> > > 7. Default Config
> > > Can’t we configure a g

[GitHub] geode pull request #399: Minor non-functional changes in response to PR comm...

2017-02-15 Thread galen-pivotal
GitHub user galen-pivotal opened a pull request:

https://github.com/apache/geode/pull/399

Minor non-functional changes in response to PR comments.

@bschuchardt 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/galen-pivotal/incubator-geode 
feature/GEODE-2444

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/geode/pull/399.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #399


commit 742f5037a8644314c576577cf8ed8f66b2dafe9c
Author: Galen OSullivan 
Date:   2017-02-15T18:53:13Z

Minor non-functional changes in response to PR comments.






Re: GeodeRedisAdapter improvments/feedback

2017-02-15 Thread Dan Smith
Doing the spill/unspill option could be pretty tricky to implement, so you
have to do a lot of fancy logic in the transition period. I think Jason's
suggestion of configuring things might make more sense.

-Dan

On Wed, Feb 15, 2017 at 1:12 PM, Jason Huynh  wrote:

> With the suggestion from Wes, the constraint on the names would have to
> apply for both small and large.  We wouldn't want the thing to explode when
> it gets converted...
>
> Is there a way to just make it configurable?  If they know they want a
> "large" set, somehow let them specify it.  Otherwise go with the "small"
> set?
>
> On Wed, Feb 15, 2017 at 1:01 PM Real Wes  wrote:
>
> > Thinking about this, I think that the “spill”/ “unspill” option may
> > actually be the best solution.  If the criteria waffles back and forth
> > along the threshold, well, that’s the acceptable worst case.
> >
> > How’s this?:
> >
> > 1) Create a separate region for the collection key
> >  - for fat collections that are updated frequently
> > ADVANTAGE: speed of replication
> > DISADVANTAGE: constraint on key name
> >
> > 2) Put the collection as an entry value:
> >- for small collections and read-only fat collections
> > ADVANTAGE: no need to create a separate region
> >
> > We would track the metrics and automatically convert based on a
> > combination of frequency of updates and size.
> >
> > We next define what a fat collection is, such as over nnMB.
> >
> >
> > On Feb 14, 2017, at 8:12 PM, Jason Huynh <jhu...@pivotal.io> wrote:
> >
> > The concern about the threshold to spill over would be do you "unspill"
> > over?  Like what if the collection contracts under the threshold and
> > teeters around the threshold.  If the user can configure this size, then
> > wouldn't they just know they want a "large" vs a "small?"
> >
> > I think Swapnil makes a good point that our value add would be that we
> can
> > scale those structures, whereas redis can already do what the "new"
> > implementation is doing.
> >
> >
> >
> > On Tue, Feb 14, 2017 at 4:59 PM Galen M O'Sullivan <gosulli...@pivotal.io> wrote:
> > If we put them in separate regions, we'll have the overhead of looking up
> > in two regions added to each and every operation, and the overhead of
> > creating all these regions.
> >
> > If we really wanted to we could have some threshold at which we spill
> > collections over into their own regions, and have something like the best
> > of both worlds. It's more complex, though, and I don't know how many
> people
> > actually use truly huge collections.
> >
> > On Tue, Feb 14, 2017 at 4:21 PM, Hitesh Khamesra <
> > hitesh...@yahoo.com.invalid> wrote:
> >
> > > Jason/Dan: Sorry to hear about that. But both of you have asked the
> right
> > > question.
> > > it depends on your use-case(item 2,3,4,5) . For example "hashes" can be
> > > use to define key-value pair or java bean. In this case  probably it is
> > > better to keep that hash at region-entry level.  But if you want to
> know
> > > top 10 tweets which are trending then probably you want use
> > > partition-region for "sorted-set".
> > >
> > >
> > >   From: Jason Huynh <jhu...@pivotal.io>
> > >  To: dev@geode.apache.org; "u...@geode.apache.org" <u...@geode.apache.org>;
> > > Hitesh Khamesra <hitesh...@yahoo.com>
> > >  Sent: Tuesday, February 14, 2017 3:15 PM
> > >  Subject: Re: GeodeRedisAdapter improvments/feedback
> > >
> > > Hi Hitesh,
> > >
> > > Not sure about everyone else, but I had a hard time reading this,
> > however
> > > I think I figured out what you were describing... the only part I still
> > am
> > > unsure about is  Feedback/vote: both behaviour is desirable.  Do you
> mean
> > > you want feedback and voting on whether both behaviors are desired?  As
> > in
> > > old implementation and new implementation?
> > >
> > > 2,3,4)  The new implementation would mean all the data for a specific
> > data
> > > structure is contained in a single bucket.  So the individual data
> > > structures are not quite scalable.  How would you allow scaling of a
> > single
> > > data structure?
> > >
> > > On Tue, Feb 14, 2017 at 3:05 PM Real Wes <thereal...@outlook.com> wrote:
> > >
> > > > In what format do you want the feedback Hitesh?  For now I’ll just
> > > comment:
> > > >
> > > > 1. Redis Type String
> > > > No comments except that a future Geode value-add would be to extend
> the
> > > > Jedis client so that the K/V’s are not compressed. In this way OQL
> and
> > CQ
> > > > will work.  The tradeoff of this is that the data cannot be read by a
> > > > native redis client but for Geode users it’s great. Call the new
> client
> > > > Geodis.
> > > >
> > > > 2. List/ Hash/ Set/ SortedSet
> > > > Creating a separate region for each creates a constraint that the
> keys
> > > are
> > > > limited to

Re: Review Request 56719: GEODE-2491: Reduce logging of handled exceptions in LuceneEventListener and LuceneBucketListeners

2017-02-15 Thread nabarun nag

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56719/#review165761
---




geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
 (line 58)


Hi Jason,

I am not sure but should we have an identical catch block for afterPrimary 
too??


- nabarun nag


On Feb. 15, 2017, 8:54 p.m., Jason Huynh wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56719/
> ---
> 
> (Updated Feb. 15, 2017, 8:54 p.m.)
> 
> 
> Review request for geode, Lynn Hughes-Godfrey, nabarun nag, and Dan Smith.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Reduced logging level of specific exceptions from warn level to debug
> 
> I can collapse the exceptions into a single catch, wasn't sure if we wanted 
> to log something different for each type or not...
> 
> No longer catching all exceptions when closing the lucene index, instead only 
> handling specific ones, we can add more if we see others being thrown...
> 
> 
> Diffs
> -
> 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
>  0af2719 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneEventListener.java
>  44453e4 
> 
> Diff: https://reviews.apache.org/r/56719/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jason Huynh
> 
>



Re: GeodeRedisAdapter improvments/feedback

2017-02-15 Thread Real Wes
Does delta propagation make worrying about frequently updated fat collections 
moot?
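Delta propagation would indeed avoid reshipping the whole collection: only the change travels. Geode's real hook is the `Delta` interface (`hasDelta`/`toDelta`/`fromDelta`); the self-contained model below captures the idea with plain streams and supports appends only, so it is a sketch of the mechanism rather than adapter code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

public class DeltaListSketch {
    final List<String> items = new ArrayList<>();
    int shipped = 0;  // number of items the replica has already received

    void add(String s) {
        items.add(s);
    }

    // Serialize only the unshipped tail instead of the whole list.
    byte[] toDelta() {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeInt(items.size() - shipped);
            for (int i = shipped; i < items.size(); i++) out.writeUTF(items.get(i));
            shipped = items.size();
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Replica side: append the received tail.
    void fromDelta(byte[] delta) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(delta));
            int n = in.readInt();
            for (int i = 0; i < n; i++) items.add(in.readUTF());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Even with deltas, operations that rewrite large parts of the structure (sorts, trims, pops from the head) still ship proportionally more data, so deltas soften but do not fully remove the fat-collection concern.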

On Feb 15, 2017, at 4:29 PM, Dan Smith <dsm...@pivotal.io> wrote:

Doing the spill/unspill option could be pretty tricky to implement, so you have 
to do a lot of fancy logic in the transition period. I think Jason's suggestion 
of configuring things might make more sense.

-Dan

On Wed, Feb 15, 2017 at 1:12 PM, Jason Huynh <jhu...@pivotal.io> wrote:
With the suggestion from Wes, the constraint on the names would have to
apply for both small and large.  We wouldn't want the thing to explode when
it gets converted...

Is there a way to just make it configurable?  If they know they want a
"large" set, somehow let them specify it.  Otherwise go with the "small"
set?

On Wed, Feb 15, 2017 at 1:01 PM Real Wes <thereal...@outlook.com> wrote:

> Thinking about this, I think that the “spill”/ “unspill” option may
> actually be the best solution.  If the criteria waffles back and forth
> along the threshold, well, that’s the acceptable worst case.
>
> How’s this?:
>
> 1) Create a separate region for the collection key
>  - for fat collections that are updated frequently
> ADVANTAGE: speed of replication
> DISADVANTAGE: constraint on key name
>
> 2) Put the collection as an entry value:
>- for small collections and read-only fat collections
> ADVANTAGE: no need to create a separate region
>
> We would track the metrics and automatically convert based on a
> combination of frequency of updates and size.
>
> We next define what a fat collection is, such as over nnMB.
>
>
> On Feb 14, 2017, at 8:12 PM, Jason Huynh <jhu...@pivotal.io> wrote:
>
> The concern about the threshold to spill over would be do you "unspill"
> over?  Like what if the collection contracts under the threshold and
> teeters around the threshold.  If the user can configure this size, then
> wouldn't they just know they want a "large" vs a "small?"
>
> I think Swapnil makes a good point that our value add would be that we can
> scale those structures, whereas redis can already do what the "new"
> implementation is doing.
>
>
>
> On Tue, Feb 14, 2017 at 4:59 PM Galen M O'Sullivan <gosulli...@pivotal.io> wrote:
> If we put them in separate regions, we'll have the overhead of looking up
> in two regions added to each and every operation, and the overhead of
> creating all these regions.
>
> If we really wanted to we could have some threshold at which we spill
> collections over into their own regions, and have something like the best
> of both worlds. It's more complex, though, and I don't know how many people
> actually use truly huge collections.
>
> On Tue, Feb 14, 2017 at 4:21 PM, Hitesh Khamesra <
> hitesh...@yahoo.com.invalid> wrote:
>
> > Jason/Dan: Sorry to hear about that. But both of you have asked the right
> > question.
> > it depends on your use-case(item 2,3,4,5) . For example "hashes" can be
> > use to define key-value pair or java bean. In this case  probably it is
> > better to keep that hash at region-entry level.  But if you want to know
> > top 10 tweets which are trending then probably you want use
> > partition-region for "sorted-set".
> >
> >
> >   From: Jason Huynh <jhu...@pivotal.io>
> >  To: dev@geode.apache.org; "u...@geode.apache.org" <u...@geode.apache.org>;
> > Hitesh Khamesra <hitesh...@yahoo.com>
> >  Sent: Tuesday, February 14, 2017 3:15 PM
> >  Subject: Re: GeodeRedisAdapter improvments/feedback
> >
> > Hi Hitesh,
> >
> > Not sure about everyone else, but I had a hard time reading this,
> however
> > I think I figured out what you were describing... the only part I still
> am
> > unsure about is  Feedback/vote: both behaviour is desirable.  Do you mean
> > you want feedback and voting on whether both behaviors are desired?  As
> in
> > old implementation and new implementation?
> >
> > 2,3,4)  The new implementation would mean all the data for a specific
> data
> > structure is contained in a single bucket.  So the individual data
> > structures are not quite scalable.  How would you allow scaling of a
> single
> > data structure?
> >
> > On Tue, Feb 14, 2017 at 3:05 PM Real Wes <thereal...@outlook.com> wrote:
> >
> > > In what format do you want the feedback Hitesh?  For now I’ll just
> > 
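The spill/unspill concern raised in this thread can be made concrete with a small sketch. This is purely illustrative — the threshold values, the hysteresis margin, and all names (`SpilloverPolicy`, `Storage`) are assumptions, not part of any Geode API; the thread itself leaves the "nn MB" figure open:

```java
// Hypothetical sketch of the spillover policy discussed above: decide whether
// a collection should live as a single region-entry value ("fat" value) or in
// its own region, based on its size. The hysteresis margin addresses Jason's
// concern about a collection that teeters around the threshold.
public class SpilloverPolicy {
  // Collections larger than this spill into their own region (illustrative).
  static final long SPILL_BYTES = 10L * 1024 * 1024;
  // Only "unspill" well below the threshold, so a collection hovering near
  // the limit does not flip back and forth between layouts.
  static final long UNSPILL_BYTES = SPILL_BYTES / 2;

  enum Storage { ENTRY_VALUE, OWN_REGION }

  static Storage next(Storage current, long sizeBytes) {
    if (current == Storage.ENTRY_VALUE && sizeBytes > SPILL_BYTES) {
      return Storage.OWN_REGION;
    }
    if (current == Storage.OWN_REGION && sizeBytes < UNSPILL_BYTES) {
      return Storage.ENTRY_VALUE;
    }
    return current; // near the threshold, keep the current layout
  }

  public static void main(String[] args) {
    Storage s = Storage.ENTRY_VALUE;
    s = next(s, 11L * 1024 * 1024); // grows past the limit -> spill
    if (s != Storage.OWN_REGION) throw new AssertionError();
    s = next(s, 9L * 1024 * 1024);  // shrinks slightly -> stays spilled
    if (s != Storage.OWN_REGION) throw new AssertionError();
    s = next(s, 1L * 1024 * 1024);  // shrinks well below -> unspill
    if (s != Storage.ENTRY_VALUE) throw new AssertionError();
    System.out.println("ok");
  }
}
```

The asymmetric thresholds are the usual answer to the "teetering" objection: converting is expensive, so it should only happen when the size change is decisive.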

[GitHub] geode pull request #398: Split the redis adapter into its own package

2017-02-15 Thread galen-pivotal
Github user galen-pivotal commented on a diff in the pull request:

https://github.com/apache/geode/pull/398#discussion_r101388584
  
--- Diff: 
geode-core/src/main/java/org/apache/geode/internal/cache/GemFireCacheImpl.java 
---
@@ -1330,21 +1360,12 @@ private void startMemcachedServer() {
   }
 
   private void startRedisServer() {
-    int port = system.getConfig().getRedisPort();
-    if (port != 0) {
-      String bindAddress = system.getConfig().getRedisBindAddress();
-      assert bindAddress != null;
-      if (bindAddress.equals(DistributionConfig.DEFAULT_REDIS_BIND_ADDRESS)) {
-        getLoggerI18n().info(
-            LocalizedStrings.GemFireCacheImpl_STARTING_GEMFIRE_REDIS_SERVER_ON_PORT_0,
-            new Object[] {port});
-      } else {
-        getLoggerI18n().info(
-            LocalizedStrings.GemFireCacheImpl_STARTING_GEMFIRE_REDIS_SERVER_ON_BIND_ADDRESS_0_PORT_1,
-            new Object[] {bindAddress, port});
-      }
-      this.redisServer = new GeodeRedisServer(bindAddress, port);
-      this.redisServer.start();
+    GeodeRedisService geodeRedisService = getService(GeodeRedisService.class);
--- End diff --

@metatype I suppose the exception call is mostly there because it's the 
flag for whether Redis is enabled. I think that eventually we would like to 
have the services be more modular (an "extension framework") and maybe start 
them without even checking which interfaces they implement, but I don't think 
that's fully formed yet.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
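The `getService(Class)` lookup pattern under discussion can be sketched in isolation. This is a toy model, not Geode's implementation: `CacheService`, `GeodeRedisService`, and the map-backed registry are stand-ins (in Geode the registry would be populated by service discovery), chosen to show why a lookup that can return "absent" avoids using an exception as the enabled/disabled flag:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Minimal registry sketch: services are looked up by interface type, and an
// absent service is an Optional.empty(), not an exception.
public class ServiceRegistrySketch {
  interface CacheService { void start(); }

  static class GeodeRedisService implements CacheService {
    boolean started;
    public void start() { started = true; }
  }

  static final Map<Class<?>, CacheService> services = new HashMap<>();

  static <T extends CacheService> Optional<T> getService(Class<T> type) {
    // "Service absent" stays distinct from "service failed to start".
    return Optional.ofNullable(type.cast(services.get(type)));
  }

  public static void main(String[] args) {
    services.put(GeodeRedisService.class, new GeodeRedisService());
    // Start the Redis service only if it is registered at all.
    getService(GeodeRedisService.class).ifPresent(CacheService::start);
    if (!((GeodeRedisService) services.get(GeodeRedisService.class)).started)
      throw new AssertionError();
    System.out.println("redis service started");
  }
}
```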


[GitHub] geode pull request #398: Split the redis adapter into its own package

2017-02-15 Thread galen-pivotal
Github user galen-pivotal commented on a diff in the pull request:

https://github.com/apache/geode/pull/398#discussion_r101388974
  
--- Diff: 
geode-core/src/main/java/org/apache/geode/internal/cache/GemFireCacheImpl.java 
---
@@ -1330,21 +1360,12 @@ private void startMemcachedServer() {
   }
 
   private void startRedisServer() {
-    int port = system.getConfig().getRedisPort();
-    if (port != 0) {
-      String bindAddress = system.getConfig().getRedisBindAddress();
-      assert bindAddress != null;
-      if (bindAddress.equals(DistributionConfig.DEFAULT_REDIS_BIND_ADDRESS)) {
-        getLoggerI18n().info(
-            LocalizedStrings.GemFireCacheImpl_STARTING_GEMFIRE_REDIS_SERVER_ON_PORT_0,
-            new Object[] {port});
-      } else {
-        getLoggerI18n().info(
-            LocalizedStrings.GemFireCacheImpl_STARTING_GEMFIRE_REDIS_SERVER_ON_BIND_ADDRESS_0_PORT_1,
-            new Object[] {bindAddress, port});
-      }
-      this.redisServer = new GeodeRedisServer(bindAddress, port);
-      this.redisServer.start();
+    GeodeRedisService geodeRedisService = getService(GeodeRedisService.class);
--- End diff --

Also, whatever code gfsh uses to start the Redis adapter will have to stay 
in gfsh, unless there is a way to register extension commands with gfsh through 
some not-yet-existent method. This would be great, though, because we could 
plug in extensions that would change Geode's functionality.




Re: Review Request 56719: GEODE-2491: Reduce logging of handled exceptions in LuceneEventListener and LuceneBucketListeners

2017-02-15 Thread Jason Huynh


> On Feb. 15, 2017, 9:33 p.m., nabarun nag wrote:
> > geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java,
> >  line 58
> > 
> >
> > Hi Jason,
> > 
> > I am not sure but should we have an identical catch block for 
> > afterPrimary too??

This one I am not sure about.  I think we could but it might be helpful to know 
why we didn't create a bucket when we turned to primary.


- Jason


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56719/#review165761
---


On Feb. 15, 2017, 8:54 p.m., Jason Huynh wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56719/
> ---
> 
> (Updated Feb. 15, 2017, 8:54 p.m.)
> 
> 
> Review request for geode, Lynn Hughes-Godfrey, nabarun nag, and Dan Smith.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Reduced logging level of specific exceptions from warn level to debug
> 
> I can collapse the exceptions into a single catch, wasn't sure if we wanted 
> to log something different for each type or not...
> 
> No longer catching all exceptions when closing the lucene index, instead only 
> handling specific ones, we can add more if we see others being thrown...
> 
> 
> Diffs
> -
> 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
>  0af2719 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneEventListener.java
>  44453e4 
> 
> Diff: https://reviews.apache.org/r/56719/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jason Huynh
> 
>
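The change this review describes — collapsing the expected exceptions into one catch and dropping them from warn to debug — can be sketched as follows. The exception classes, logger, and `process` method here are assumed stand-ins, not the real Geode/Lucene listener code:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: specific, expected exceptions are handled in a single catch and
// logged at debug (FINE) rather than warn; anything unexpected propagates.
public class ListenerLoggingSketch {
  static final Logger logger = Logger.getLogger("LuceneEventListener");

  // Stand-ins for the specific exceptions the listener expects to handle.
  static class CacheClosedException extends RuntimeException {}
  static class BucketNotFoundException extends RuntimeException {}

  static boolean process(Runnable work) {
    try {
      work.run();
      return true;
    } catch (CacheClosedException | BucketNotFoundException e) {
      // Expected during shutdown or rebalance: a quiet debug entry keeps
      // routine operation from flooding the logs with warnings.
      logger.log(Level.FINE, "handled while updating Lucene index", e);
      return false;
    }
  }

  public static void main(String[] args) {
    if (!process(() -> {})) throw new AssertionError();
    if (process(() -> { throw new CacheClosedException(); }))
      throw new AssertionError();
    System.out.println("expected exceptions handled at debug level");
  }
}
```

The multi-catch is the "collapse into a single catch" option the description mentions; splitting it back into separate catches is only needed if each type warrants a different message.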



[GitHub] geode-native pull request #13: GEODE-2476: Replace gfcpp with geode.

2017-02-15 Thread PivotalSarge
GitHub user PivotalSarge opened a pull request:

https://github.com/apache/geode-native/pull/13

GEODE-2476: Replace gfcpp with geode.

- Rename directories and files with gfcpp into their name to
  instead use geode and update all references thereto.
- Rename the gfcpp executable to apache-geode-getversion and
  modify it to print the version even in the absence of
  command-line arguments.
- Rename gfcpp.properties to geode.properties and update all
  references thereto.
- Ensure formatting style guide compliance.
- Re-applying fixes for Windows compilation errors.
- Fix logic for using clang tidy auto-fix.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/PivotalSarge/geode-native feature/GEODE-2476

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/geode-native/pull/13.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #13


commit 56e21e5e89bed28416237c6d752d60f9d894e9b7
Author: Sarge 
Date:   2017-02-14T18:31:51Z

GEODE-2476: Replace gfcpp with geode.

- Rename directories and files with gfcpp into their name to
  instead use geode and update all references thereto.
- Rename the gfcpp executable to apache-geode-getversion and
  modify it to print the version even in the absence of
  command-line arguments.
- Rename gfcpp.properties to geode.properties and update all
  references thereto.
- Ensure formatting style guide compliance.
- Re-applying fixes for Windows compilation errors.
- Fix logic for using clang tidy auto-fix.






[jira] [Commented] (GEODE-2476) Replace gfcpp with geode

2017-02-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868619#comment-15868619
 ] 

ASF GitHub Bot commented on GEODE-2476:
---

GitHub user PivotalSarge opened a pull request:

https://github.com/apache/geode-native/pull/13

GEODE-2476: Replace gfcpp with geode.

- Rename directories and files with gfcpp into their name to
  instead use geode and update all references thereto.
- Rename the gfcpp executable to apache-geode-getversion and
  modify it to print the version even in the absence of
  command-line arguments.
- Rename gfcpp.properties to geode.properties and update all
  references thereto.
- Ensure formatting style guide compliance.
- Re-applying fixes for Windows compilation errors.
- Fix logic for using clang tidy auto-fix.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/PivotalSarge/geode-native feature/GEODE-2476

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/geode-native/pull/13.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #13


commit 56e21e5e89bed28416237c6d752d60f9d894e9b7
Author: Sarge 
Date:   2017-02-14T18:31:51Z

GEODE-2476: Replace gfcpp with geode.

- Rename directories and files with gfcpp into their name to
  instead use geode and update all references thereto.
- Rename the gfcpp executable to apache-geode-getversion and
  modify it to print the version even in the absence of
  command-line arguments.
- Rename gfcpp.properties to geode.properties and update all
  references thereto.
- Ensure formatting style guide compliance.
- Re-applying fixes for Windows compilation errors.
- Fix logic for using clang tidy auto-fix.




> Replace gfcpp with geode
> 
>
> Key: GEODE-2476
> URL: https://issues.apache.org/jira/browse/GEODE-2476
> Project: Geode
>  Issue Type: Improvement
>  Components: native client
>Reporter: Michael Dodge
>Assignee: Michael Dodge
>
> The substring "gfcpp" still occurs in some places in the native client 
> codebase. It ought to be replaced with "geode" or "geode-native", whichever 
> makes more sense on a case-by-case basis.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] JIRA guidelines

2017-02-15 Thread Dan Smith
The draft bit has been removed. Thanks to William for writing these in the
first place!

-Dan

On Wed, Feb 15, 2017 at 8:16 AM, Michael William Dodge 
wrote:

> +1
>
> > On 14 Feb, 2017, at 20:50, William Markito Oliveira <
> william.mark...@gmail.com> wrote:
> >
> > +1
> >
> > Finally!! ;)
> >
> > Sent from my iPhone
> >
> >> On Feb 14, 2017, at 7:59 PM, Galen M O'Sullivan 
> wrote:
> >>
> >> +1 to the article and removing the draft label
> >>
> >>> On Tue, Feb 14, 2017 at 4:05 PM, Akihiro Kitada 
> wrote:
> >>>
> >>> I agree!
> >>>
> >>>
> >>> --
> >>> Akihiro Kitada  |  Staff Customer Engineer |  +81 80 3716 3736
> >>> Support.Pivotal.io   |  Mon-Fri  9:00am to
> >>> 5:30pm JST  |  1-877-477-2269
> >>>
> >>>
> >>> 2017-02-15 8:47 GMT+09:00 Dan Smith :
> >>>
>  We have this draft of JIRA guidelines sitting on the wiki. I updated
> it
>  slightly. Can we agree on these guidelines and remove the draft
> label? Is
>  there more that needs to be here?
> 
>  https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=57311462
> 
>  -Dan
> 
> >>>
>
>


Re: Review Request 56719: GEODE-2491: Reduce logging of handled exceptions in LuceneEventListener and LuceneBucketListeners

2017-02-15 Thread nabarun nag

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56719/#review165766
---


Ship it!




Ship It!

- nabarun nag


On Feb. 15, 2017, 8:54 p.m., Jason Huynh wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56719/
> ---
> 
> (Updated Feb. 15, 2017, 8:54 p.m.)
> 
> 
> Review request for geode, Lynn Hughes-Godfrey, nabarun nag, and Dan Smith.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Reduced logging level of specific exceptions from warn level to debug
> 
> I can collapse the exceptions into a single catch, wasn't sure if we wanted 
> to log something different for each type or not...
> 
> No longer catching all exceptions when closing the lucene index, instead only 
> handling specific ones, we can add more if we see others being thrown...
> 
> 
> Diffs
> -
> 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneBucketListener.java
>  0af2719 
>   
> geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneEventListener.java
>  44453e4 
> 
> Diff: https://reviews.apache.org/r/56719/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jason Huynh
> 
>



[GitHub] geode pull request #398: Split the redis adapter into its own package

2017-02-15 Thread metatype
Github user metatype commented on a diff in the pull request:

https://github.com/apache/geode/pull/398#discussion_r101391406
  
--- Diff: 
geode-core/src/main/java/org/apache/geode/internal/cache/GemFireCacheImpl.java 
---
@@ -1330,21 +1360,12 @@ private void startMemcachedServer() {
   }
 
   private void startRedisServer() {
-    int port = system.getConfig().getRedisPort();
-    if (port != 0) {
-      String bindAddress = system.getConfig().getRedisBindAddress();
-      assert bindAddress != null;
-      if (bindAddress.equals(DistributionConfig.DEFAULT_REDIS_BIND_ADDRESS)) {
-        getLoggerI18n().info(
-            LocalizedStrings.GemFireCacheImpl_STARTING_GEMFIRE_REDIS_SERVER_ON_PORT_0,
-            new Object[] {port});
-      } else {
-        getLoggerI18n().info(
-            LocalizedStrings.GemFireCacheImpl_STARTING_GEMFIRE_REDIS_SERVER_ON_BIND_ADDRESS_0_PORT_1,
-            new Object[] {bindAddress, port});
-      }
-      this.redisServer = new GeodeRedisServer(bindAddress, port);
-      this.redisServer.start();
+    GeodeRedisService geodeRedisService = getService(GeodeRedisService.class);
--- End diff --

Good news!  There *is* a way to start services and do gfsh extensions.  
Take a look at the lucene module.  The only bits that are in `geode-core` are 
for serialization and string messages--which we don't have a good solution for 
as of yet (contributions welcome!).
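The lucene-module pattern referred to here rests on `java.util.ServiceLoader`: a module ships a provider-configuration file under `META-INF/services`, and core discovers implementations at runtime with no compile-time dependency. A minimal sketch, with `CacheService` as a stand-in for the real internal interface:

```java
import java.util.ServiceLoader;

// Sketch of ServiceLoader-based extension discovery. With no
// META-INF/services/...CacheService file on the classpath, the loader simply
// finds nothing — the feature is "off" without any exception being thrown.
public class ExtensionDiscoverySketch {
  public interface CacheService { String name(); }

  public static void main(String[] args) {
    int found = 0;
    for (CacheService s : ServiceLoader.load(CacheService.class)) {
      System.out.println("starting " + s.name()); // start each discovered module
      found++;
    }
    System.out.println(found + " extension(s) found");
  }
}
```

A module opts in by placing a file named after the fully-qualified interface in its own jar, listing its implementation class — which is what lets an adapter like Redis live entirely outside `geode-core`.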


