[jira] [Commented] (GEODE-9546) Enable Redis Server to Authenticate Using SecurityManager

2021-09-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-9546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17415671#comment-17415671
 ] 

ASF subversion and git services commented on GEODE-9546:


Commit bca47554d51269d1672a5726b92bd83429f02177 in geode's branch 
refs/heads/develop from Jens Deppe
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=bca4755 ]

Revert "Revert "GEODE-9546: Integrate Security Manager into Radish AUTH flow 
(#6844)" (#6864)" (#6868)

This reverts commit b97370facd0b02307a3562e312fbc08945a86956.

> Enable Redis Server to Authenticate Using SecurityManager
> -
>
> Key: GEODE-9546
> URL: https://issues.apache.org/jira/browse/GEODE-9546
> Project: Geode
>  Issue Type: New Feature
>  Components: redis
>Reporter: Wayne
>Priority: Major
>  Labels: pull-request-available, redis
>
> The Redis [AUTH|https://redis.io/commands/auth] command must be integrated 
> with the Geode SecurityManager.
>  # Remove the Geode property compatible-with-redis-password that is currently 
> being used for the Redis password.
>  # Add a new Geode property, compatible-with-redis-username, for the Redis 
> default user ID.
>  # When a user issues an AUTH command, the server must call the authenticate 
> method on the customer's SecurityManager with the username (security-username 
> property) and the user-provided password (security-password property), and 
> properly handle the AuthenticationFailedException. If the AUTH command is 
> called without a username, the value of compatible-with-redis-username should 
> be used (see the note below).
>  #  The Object/Principal returned from a successful authenticate method call 
> must be cached, associated with the client connection, and available for 
> reuse in subsequent authorization calls.
> Note: When the AUTH command has a single argument (e.g. AUTH xx), the single 
> argument is interpreted as a password/token and the default Redis user is 
> used for authentication. When the AUTH command has two arguments (e.g. AUTH 
> xx yy), the first argument is interpreted as a username and is used 
> instead of the default Redis user; the second argument is interpreted as a 
> password.
>  +Acceptance Criteria+
>  
> When a SecurityManager is configured, Redis clients that don't AUTH with a 
> valid password cannot perform operations. Redis clients that do AUTH with a 
> valid password can perform Redis operations. Until we support ACLs, use of 
> the AUTH command with more than two arguments is invalid syntax.
>  
>  
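For illustration, a minimal sketch of how the AUTH arguments could be mapped onto 
a SecurityManager.authenticate() call. The class and field names are invented for 
this example; SecurityManager, AuthenticationFailedException, and the 
security-username/security-password property keys are the Geode pieces named in 
the description above.

{code:java}
import java.util.Properties;
import org.apache.geode.security.AuthenticationFailedException;
import org.apache.geode.security.SecurityManager;

class AuthCommandSketch {
  private final SecurityManager securityManager;   // the cluster's configured SecurityManager
  private final String defaultRedisUsername;       // value of compatible-with-redis-username

  AuthCommandSketch(SecurityManager securityManager, String defaultRedisUsername) {
    this.securityManager = securityManager;
    this.defaultRedisUsername = defaultRedisUsername;
  }

  /**
   * AUTH password            -> authenticate the default Redis user
   * AUTH username password   -> authenticate the given user
   * Returns the principal to cache on the client connection, or null on failure.
   */
  Object handleAuth(String[] args) {
    String username = (args.length == 2) ? args[0] : defaultRedisUsername;
    String password = (args.length == 2) ? args[1] : args[0];

    Properties credentials = new Properties();
    credentials.setProperty("security-username", username);
    credentials.setProperty("security-password", password);

    try {
      // The returned Object/Principal is what gets cached with the connection
      // and reused for subsequent authorization calls.
      return securityManager.authenticate(credentials);
    } catch (AuthenticationFailedException e) {
      return null; // the caller would translate this into a Redis error response
    }
  }
}
{code}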



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-9583) CI Failure: DiskDistributedNoAckAsyncOverflowRegionDUnitTest > testNoDataSerializer fails with NotSerializableException

2021-09-15 Thread Jianxia Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianxia Chen resolved GEODE-9583.
-
Resolution: Fixed

> CI Failure: DiskDistributedNoAckAsyncOverflowRegionDUnitTest > 
> testNoDataSerializer fails with NotSerializableException
> ---
>
> Key: GEODE-9583
> URL: https://issues.apache.org/jira/browse/GEODE-9583
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.15.0
>Reporter: Bill Burcham
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeOperationAPI, pull-request-available
>
> This test run 
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-mass-test-run/jobs/distributed-test-openjdk8/builds/1549
>  failed with:
> {code:java}
> org.apache.geode.cache30.DiskDistributedNoAckAsyncOverflowRegionDUnitTest > 
> testNoDataSerializer FAILED
> java.lang.AssertionError: Suspicious strings were written to the log 
> during this run.
> Fix the strings or use IgnoredException.addIgnoredException to ignore.
> ---
> ...
> Found suspect string in 'dunit_suspect-vm2.log' at line 493
> [error 2021/09/04 11:19:51.404 UTC  DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer> 
> tid=1025] A DiskAccessException has occurred while writing to the disk for 
> disk store 
> DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer. The 
> cache will be closed.
> org.apache.geode.cache.DiskAccessException: For DiskStore: 
> DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer: Fatal 
> error from asynchronous flusher thread, caused by 
> org.apache.geode.SerializationException: An IOException was thrown while 
> serializing.
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.doAsyncFlush(DiskStoreImpl.java:1796)
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.run(DiskStoreImpl.java:1710)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.geode.SerializationException: An IOException was 
> thrown while serializing.
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2107)
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2090)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.createValueWrapper(DiskEntry.java:755)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.createValueWrapperFromEntry(DiskEntry.java:791)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeToDisk(DiskEntry.java:809)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeToDisk(DiskEntry.java:799)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeEntryToDisk(DiskEntry.java:1465)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.doAsyncFlush(DiskEntry.java:1417)
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.doAsyncFlush(DiskStoreImpl.java:1752)
>   ... 2 more
> Caused by: java.io.NotSerializableException: 
> org.apache.geode.cache30.MultiVMRegionTestCase$LongWrapper
>   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
>   at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
>   at 
> org.apache.geode.internal.InternalDataSerializer.writeSerializableObject(InternalDataSerializer.java:2187)
>   at 
> org.apache.geode.internal.InternalDataSerializer.basicWriteObject(InternalDataSerializer.java:2056)
>   at org.apache.geode.DataSerializer.writeObject(DataSerializer.java:2839)
>   at 
> org.apache.geode.internal.util.BlobHelper.serializeToBlob(BlobHelper.java:54)
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2105)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-9583) CI Failure: DiskDistributedNoAckAsyncOverflowRegionDUnitTest > testNoDataSerializer fails with NotSerializableException

2021-09-15 Thread Jianxia Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianxia Chen updated GEODE-9583:

Fix Version/s: 1.15.0

> CI Failure: DiskDistributedNoAckAsyncOverflowRegionDUnitTest > 
> testNoDataSerializer fails with NotSerializableException
> ---
>
> Key: GEODE-9583
> URL: https://issues.apache.org/jira/browse/GEODE-9583
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.15.0
>Reporter: Bill Burcham
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeOperationAPI, pull-request-available
> Fix For: 1.15.0
>
>
> This test run 
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-mass-test-run/jobs/distributed-test-openjdk8/builds/1549
>  failed with:
> {code:java}
> org.apache.geode.cache30.DiskDistributedNoAckAsyncOverflowRegionDUnitTest > 
> testNoDataSerializer FAILED
> java.lang.AssertionError: Suspicious strings were written to the log 
> during this run.
> Fix the strings or use IgnoredException.addIgnoredException to ignore.
> ---
> ...
> Found suspect string in 'dunit_suspect-vm2.log' at line 493
> [error 2021/09/04 11:19:51.404 UTC  DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer> 
> tid=1025] A DiskAccessException has occurred while writing to the disk for 
> disk store 
> DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer. The 
> cache will be closed.
> org.apache.geode.cache.DiskAccessException: For DiskStore: 
> DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer: Fatal 
> error from asynchronous flusher thread, caused by 
> org.apache.geode.SerializationException: An IOException was thrown while 
> serializing.
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.doAsyncFlush(DiskStoreImpl.java:1796)
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.run(DiskStoreImpl.java:1710)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.geode.SerializationException: An IOException was 
> thrown while serializing.
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2107)
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2090)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.createValueWrapper(DiskEntry.java:755)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.createValueWrapperFromEntry(DiskEntry.java:791)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeToDisk(DiskEntry.java:809)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeToDisk(DiskEntry.java:799)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeEntryToDisk(DiskEntry.java:1465)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.doAsyncFlush(DiskEntry.java:1417)
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.doAsyncFlush(DiskStoreImpl.java:1752)
>   ... 2 more
> Caused by: java.io.NotSerializableException: 
> org.apache.geode.cache30.MultiVMRegionTestCase$LongWrapper
>   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
>   at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
>   at 
> org.apache.geode.internal.InternalDataSerializer.writeSerializableObject(InternalDataSerializer.java:2187)
>   at 
> org.apache.geode.internal.InternalDataSerializer.basicWriteObject(InternalDataSerializer.java:2056)
>   at org.apache.geode.DataSerializer.writeObject(DataSerializer.java:2839)
>   at 
> org.apache.geode.internal.util.BlobHelper.serializeToBlob(BlobHelper.java:54)
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2105)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-9583) CI Failure: DiskDistributedNoAckAsyncOverflowRegionDUnitTest > testNoDataSerializer fails with NotSerializableException

2021-09-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17415672#comment-17415672
 ] 

ASF subversion and git services commented on GEODE-9583:


Commit 0a579bde6777cfd75654a59527b5ea3dc37af550 in geode's branch 
refs/heads/develop from Jianxia Chen
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=0a579bd ]

GEODE-9583: Flush the disk stores before unregistering all serializers (#6854)
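The commit message above summarizes the fix: flush any pending asynchronous disk 
writes before the test removes its DataSerializers, so the flusher thread never 
has to serialize a value whose serializer is already gone. A minimal sketch of 
that ordering follows; the disk store name and the unregister step are 
placeholders, and only findDiskStore() and DiskStore.flush() are standard Geode 
API.

{code:java}
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.DiskStore;

class TearDownSketch {
  static void tearDown(Cache cache) {
    // 1. Drain the async write queue so nothing remains for the flusher thread.
    DiskStore diskStore = cache.findDiskStore("overflowDiskStore"); // hypothetical name
    if (diskStore != null) {
      diskStore.flush();
    }
    // 2. Only then remove the test's serializers.
    unregisterAllSerializers();
  }

  static void unregisterAllSerializers() {
    // placeholder for the test's existing serializer cleanup
  }
}
{code}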



> CI Failure: DiskDistributedNoAckAsyncOverflowRegionDUnitTest > 
> testNoDataSerializer fails with NotSerializableException
> ---
>
> Key: GEODE-9583
> URL: https://issues.apache.org/jira/browse/GEODE-9583
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.15.0
>Reporter: Bill Burcham
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeOperationAPI, pull-request-available
> Fix For: 1.15.0
>
>
> This test run 
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-mass-test-run/jobs/distributed-test-openjdk8/builds/1549
>  failed with:
> {code:java}
> org.apache.geode.cache30.DiskDistributedNoAckAsyncOverflowRegionDUnitTest > 
> testNoDataSerializer FAILED
> java.lang.AssertionError: Suspicious strings were written to the log 
> during this run.
> Fix the strings or use IgnoredException.addIgnoredException to ignore.
> ---
> ...
> Found suspect string in 'dunit_suspect-vm2.log' at line 493
> [error 2021/09/04 11:19:51.404 UTC  DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer> 
> tid=1025] A DiskAccessException has occurred while writing to the disk for 
> disk store 
> DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer. The 
> cache will be closed.
> org.apache.geode.cache.DiskAccessException: For DiskStore: 
> DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer: Fatal 
> error from asynchronous flusher thread, caused by 
> org.apache.geode.SerializationException: An IOException was thrown while 
> serializing.
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.doAsyncFlush(DiskStoreImpl.java:1796)
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.run(DiskStoreImpl.java:1710)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.geode.SerializationException: An IOException was 
> thrown while serializing.
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2107)
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2090)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.createValueWrapper(DiskEntry.java:755)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.createValueWrapperFromEntry(DiskEntry.java:791)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeToDisk(DiskEntry.java:809)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeToDisk(DiskEntry.java:799)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeEntryToDisk(DiskEntry.java:1465)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.doAsyncFlush(DiskEntry.java:1417)
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.doAsyncFlush(DiskStoreImpl.java:1752)
>   ... 2 more
> Caused by: java.io.NotSerializableException: 
> org.apache.geode.cache30.MultiVMRegionTestCase$LongWrapper
>   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
>   at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
>   at 
> org.apache.geode.internal.InternalDataSerializer.writeSerializableObject(InternalDataSerializer.java:2187)
>   at 
> org.apache.geode.internal.InternalDataSerializer.basicWriteObject(InternalDataSerializer.java:2056)
>   at org.apache.geode.DataSerializer.writeObject(DataSerializer.java:2839)
>   at 
> org.apache.geode.internal.util.BlobHelper.serializeToBlob(BlobHelper.java:54)
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2105)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-9583) CI Failure: DiskDistributedNoAckAsyncOverflowRegionDUnitTest > testNoDataSerializer fails with NotSerializableException

2021-09-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17415675#comment-17415675
 ] 

ASF subversion and git services commented on GEODE-9583:


Commit eeed28b7563fe702fcf1588b668299bc71b23eec in geode's branch 
refs/heads/support/1.14 from Jianxia Chen
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=eeed28b ]

GEODE-9583: Flush the disk stores before unregistering all serializers (#6854)

(cherry picked from commit 0a579bde6777cfd75654a59527b5ea3dc37af550)


> CI Failure: DiskDistributedNoAckAsyncOverflowRegionDUnitTest > 
> testNoDataSerializer fails with NotSerializableException
> ---
>
> Key: GEODE-9583
> URL: https://issues.apache.org/jira/browse/GEODE-9583
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.15.0
>Reporter: Bill Burcham
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeOperationAPI, pull-request-available
> Fix For: 1.14.1, 1.15.0
>
>
> This test run 
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-mass-test-run/jobs/distributed-test-openjdk8/builds/1549
>  failed with:
> {code:java}
> org.apache.geode.cache30.DiskDistributedNoAckAsyncOverflowRegionDUnitTest > 
> testNoDataSerializer FAILED
> java.lang.AssertionError: Suspicious strings were written to the log 
> during this run.
> Fix the strings or use IgnoredException.addIgnoredException to ignore.
> ---
> ...
> Found suspect string in 'dunit_suspect-vm2.log' at line 493
> [error 2021/09/04 11:19:51.404 UTC  DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer> 
> tid=1025] A DiskAccessException has occurred while writing to the disk for 
> disk store 
> DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer. The 
> cache will be closed.
> org.apache.geode.cache.DiskAccessException: For DiskStore: 
> DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer: Fatal 
> error from asynchronous flusher thread, caused by 
> org.apache.geode.SerializationException: An IOException was thrown while 
> serializing.
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.doAsyncFlush(DiskStoreImpl.java:1796)
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.run(DiskStoreImpl.java:1710)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.geode.SerializationException: An IOException was 
> thrown while serializing.
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2107)
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2090)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.createValueWrapper(DiskEntry.java:755)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.createValueWrapperFromEntry(DiskEntry.java:791)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeToDisk(DiskEntry.java:809)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeToDisk(DiskEntry.java:799)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeEntryToDisk(DiskEntry.java:1465)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.doAsyncFlush(DiskEntry.java:1417)
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.doAsyncFlush(DiskStoreImpl.java:1752)
>   ... 2 more
> Caused by: java.io.NotSerializableException: 
> org.apache.geode.cache30.MultiVMRegionTestCase$LongWrapper
>   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
>   at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
>   at 
> org.apache.geode.internal.InternalDataSerializer.writeSerializableObject(InternalDataSerializer.java:2187)
>   at 
> org.apache.geode.internal.InternalDataSerializer.basicWriteObject(InternalDataSerializer.java:2056)
>   at org.apache.geode.DataSerializer.writeObject(DataSerializer.java:2839)
>   at 
> org.apache.geode.internal.util.BlobHelper.serializeToBlob(BlobHelper.java:54)
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2105)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-9583) CI Failure: DiskDistributedNoAckAsyncOverflowRegionDUnitTest > testNoDataSerializer fails with NotSerializableException

2021-09-15 Thread Jianxia Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianxia Chen updated GEODE-9583:

Fix Version/s: 1.14.1

> CI Failure: DiskDistributedNoAckAsyncOverflowRegionDUnitTest > 
> testNoDataSerializer fails with NotSerializableException
> ---
>
> Key: GEODE-9583
> URL: https://issues.apache.org/jira/browse/GEODE-9583
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.15.0
>Reporter: Bill Burcham
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeOperationAPI, pull-request-available
> Fix For: 1.14.1, 1.15.0
>
>
> This test run 
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-mass-test-run/jobs/distributed-test-openjdk8/builds/1549
>  failed with:
> {code:java}
> org.apache.geode.cache30.DiskDistributedNoAckAsyncOverflowRegionDUnitTest > 
> testNoDataSerializer FAILED
> java.lang.AssertionError: Suspicious strings were written to the log 
> during this run.
> Fix the strings or use IgnoredException.addIgnoredException to ignore.
> ---
> ...
> Found suspect string in 'dunit_suspect-vm2.log' at line 493
> [error 2021/09/04 11:19:51.404 UTC  DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer> 
> tid=1025] A DiskAccessException has occurred while writing to the disk for 
> disk store 
> DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer. The 
> cache will be closed.
> org.apache.geode.cache.DiskAccessException: For DiskStore: 
> DiskDistributedNoAckAsyncOverflowRegionDUnitTest_testNoDataSerializer: Fatal 
> error from asynchronous flusher thread, caused by 
> org.apache.geode.SerializationException: An IOException was thrown while 
> serializing.
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.doAsyncFlush(DiskStoreImpl.java:1796)
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.run(DiskStoreImpl.java:1710)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.geode.SerializationException: An IOException was 
> thrown while serializing.
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2107)
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2090)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.createValueWrapper(DiskEntry.java:755)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.createValueWrapperFromEntry(DiskEntry.java:791)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeToDisk(DiskEntry.java:809)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeToDisk(DiskEntry.java:799)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.writeEntryToDisk(DiskEntry.java:1465)
>   at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.doAsyncFlush(DiskEntry.java:1417)
>   at 
> org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.doAsyncFlush(DiskStoreImpl.java:1752)
>   ... 2 more
> Caused by: java.io.NotSerializableException: 
> org.apache.geode.cache30.MultiVMRegionTestCase$LongWrapper
>   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
>   at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
>   at 
> org.apache.geode.internal.InternalDataSerializer.writeSerializableObject(InternalDataSerializer.java:2187)
>   at 
> org.apache.geode.internal.InternalDataSerializer.basicWriteObject(InternalDataSerializer.java:2056)
>   at org.apache.geode.DataSerializer.writeObject(DataSerializer.java:2839)
>   at 
> org.apache.geode.internal.util.BlobHelper.serializeToBlob(BlobHelper.java:54)
>   at 
> org.apache.geode.internal.cache.EntryEventImpl.serialize(EntryEventImpl.java:2105)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-9604) Fix test in MSetDUnitTest

2021-09-15 Thread Jens Deppe (Jira)
Jens Deppe created GEODE-9604:
-

 Summary: Fix test in MSetDUnitTest
 Key: GEODE-9604
 URL: https://issues.apache.org/jira/browse/GEODE-9604
 Project: Geode
  Issue Type: Test
  Components: redis
Reporter: Jens Deppe


The test {{testMSet_crashDoesNotLeaveInconsistencies}} does not reliably pass 
under stress testing; it occasionally fails. This may be due to a product issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-9575) redis publish sends an extra message to each server

2021-09-15 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-9575:
---

Assignee: Darrel Schneider

> redis publish sends an extra message to each server
> ---
>
> Key: GEODE-9575
> URL: https://issues.apache.org/jira/browse/GEODE-9575
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>
> The redis publish command uses a geode function to distribute the publish to 
> each server that may have subscriptions. When it does this it calls 
> PartitionRegionHelper.getPartitionRegionInfo. It turns out the implementation 
> of this method sends a message to each data store: 
> FetchPartitionDetailsMessage.
> Since all the redis publish command needs to do is execute the function on 
> each server that hosts the redis data region, it could instead do something 
> like the following in RegionProvider.getRegionMembers:
> {code:java}
> // Set element type assumed here to be InternalDistributedMember.
> Set<InternalDistributedMember> otherMembers =
>     partitionedRegion.getRegionAdvisor().adviseDataStore();
> Set<InternalDistributedMember> result = new HashSet<>(otherMembers.size() + 1);
> result.addAll(otherMembers);
> result.add(partitionedRegion.getDistributionManager().getDistributionManagerId());
> return result;
> {code}
> When I did this I started seeing one of the tests fail. It looked like it 
> might have had to do with something left around from one test method 
> interfering with another. It is possible that the older, slower code works in 
> this test because the extra messaging slows publish down. It was a dunit test 
> that did some HA (starting and stopping servers). I verified that this code 
> produces the same set of members as the old code, but I did not figure out 
> what was wrong with the test that started failing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-9522) When a server is force disconnected, it should set shutdown cause for dm to prevent clients recreating server connection.

2021-09-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17415776#comment-17415776
 ] 

ASF subversion and git services commented on GEODE-9522:


Commit cc0167799a21fe48444314009af5b53acc940661 in geode's branch 
refs/heads/feature/GEODE-9522 from zhouxh
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=cc01677 ]

GEODE-9522: setShutdownCause with ForceDisconnection as cause at the beginning 
of forceDisconnect()


> When a server is force disconnected, it should set shutdown cause for dm to 
> prevent clients recreating server connection.
> -
>
> Key: GEODE-9522
> URL: https://issues.apache.org/jira/browse/GEODE-9522
> Project: Geode
>  Issue Type: Bug
>Reporter: Xiaojian Zhou
>Priority: Major
>  Labels: pull-request-available
>
> When a client is doing puts (mainly creates) to servers with a replicated 
> region, and some servers are shut down to force a switch of the primary 
> HARegionQueue, the event with a later event id is sometimes distributed by the 
> previous primary HARegionQueue, which causes events with earlier event ids to 
> be rejected by clients. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-9519) Implement ZSCAN Command

2021-09-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17415784#comment-17415784
 ] 

ASF subversion and git services commented on GEODE-9519:


Commit d58df400b9bacda3fb3e856b5c2e87c5df840799 in geode's branch 
refs/heads/develop from Donal Evans
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=d58df40 ]

GEODE-9519: Implement Radish ZSCAN command (#6831)

 - Change implementation to return integer cursor instead of BigInteger
 as there was no possible way for the returned value for cursor to be
 greater than Integer.MAX_VALUE
 - Move scan execution logic to AbstractScanExecutor to allow
 inheritance
 - Do not attempt to handle CURSOR values greater than Long.MAX_VALUE
for HSCAN and ZSCAN
 - Use scan count argument directly rather than looping in RedisSortedSet
 and RedisHash
 - Minor refactor of how scan empty results are returned
 - Add ZSCAN to list of supported commands
 - Add tests for ZSCAN

Authored-by: Donal Evans 

> Implement ZSCAN Command
> ---
>
> Key: GEODE-9519
> URL: https://issues.apache.org/jira/browse/GEODE-9519
> Project: Geode
>  Issue Type: New Feature
>  Components: redis
>Reporter: Wayne
>Assignee: Donal Evans
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> Implement the [ZSCAN|https://redis.io/commands/zscan] command with associated 
> options, and tests.
>  
> +Acceptance Criteria+
>  
> The command has been implemented along with appropriate unit tests.
>  
> The  command has been added to the AbstractHitsMissesIntegrationTest.  The 
> command has been tested using the redis-cli tool and verified against native 
> redis.
>  
>  
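For reference, a hedged example of the kind of client-side check the acceptance 
criteria describe, written against the Jedis 3.x client (an assumption; the 
actual integration tests may drive the command differently).

{code:java}
import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;
import redis.clients.jedis.Tuple;

public class ZScanExample {
  public static void main(String[] args) {
    try (Jedis jedis = new Jedis("localhost", 6379)) {
      jedis.zadd("myzset", 1.0, "a");
      jedis.zadd("myzset", 2.0, "b");

      // Iterate the sorted set with ZSCAN until the cursor returns to 0.
      String cursor = "0";
      do {
        ScanResult<Tuple> page = jedis.zscan("myzset", cursor, new ScanParams().count(10));
        page.getResult()
            .forEach(t -> System.out.println(t.getElement() + " -> " + t.getScore()));
        cursor = page.getCursor();
      } while (!"0".equals(cursor));
    }
  }
}
{code}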



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-9427) Radish HSCAN implementation does not accept values for CURSOR argument that match Redis

2021-09-15 Thread Donal Evans (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donal Evans reassigned GEODE-9427:
--

Assignee: Donal Evans

> Radish HSCAN implementation does not accept values for CURSOR argument that 
> match Redis
> ---
>
> Key: GEODE-9427
> URL: https://issues.apache.org/jira/browse/GEODE-9427
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Affects Versions: 1.15.0
>Reporter: Donal Evans
>Assignee: Donal Evans
>Priority: Major
>
> The HSCAN command takes an argument, CURSOR, which in native Redis can be any 
> value between -18446744073709551615 and 18446744073709551615 (the maximum 
> value of an unsigned long). The Radish implementation of HSCAN only accepts 
> values in the range {{Integer.MIN_VALUE}} -> {{Integer.MAX_VALUE}} and 
> returns an error if values outside this range are used.
> The Radish HSCAN implementation should be modified to accept the same range 
> of values as Redis. Examples of this can be found in the implementations of 
> the currently unsupported SCAN and SSCAN commands.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-9519) Implement ZSCAN Command

2021-09-15 Thread Donal Evans (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donal Evans resolved GEODE-9519.

Fix Version/s: 1.15.0
   Resolution: Fixed

> Implement ZSCAN Command
> ---
>
> Key: GEODE-9519
> URL: https://issues.apache.org/jira/browse/GEODE-9519
> Project: Geode
>  Issue Type: New Feature
>  Components: redis
>Reporter: Wayne
>Assignee: Donal Evans
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> Implement the [ZSCAN|https://redis.io/commands/zscan] command with associated 
> options, and tests.
>  
> +Acceptance Criteria+
>  
> The command has been implemented along with appropriate unit tests.
>  
> The  command has been added to the AbstractHitsMissesIntegrationTest.  The 
> command has been tested using the redis-cli tool and verified against native 
> redis.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-9427) Radish HSCAN implementation does not accept values for CURSOR argument that match Redis

2021-09-15 Thread Donal Evans (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donal Evans resolved GEODE-9427.

Fix Version/s: 1.15.0
   Resolution: Fixed

The Redis documentation states that *SCAN behaviour is undefined for CURSOR 
values other than 0 or the value returned by the previous *SCAN command. Given 
that it is not possible for a *SCAN command to return a value greater than 
Integer.MAX_VALUE, any value greater than that is not a valid CURSOR and so the 
behaviour of the command is undefined.

For this reason, it was decided that returning an error for CURSOR values 
greater than Long.MAX_VALUE was an acceptable compromise that prevented the 
need to create BigInteger objects that are only used for corner-case input 
validation while still allowing behaviour that's the same as native Redis for 
all valid inputs.
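A brief sketch of the compromise described above; the method and error text are 
illustrative rather than the actual Radish code. Cursors are parsed as signed 
longs, and anything that does not fit is rejected instead of being widened to 
BigInteger.

{code:java}
// Values such as 18446744073709551615 overflow a signed long; since no *SCAN
// response could ever have returned them, they are treated as invalid cursors.
static long parseCursor(String cursorArgument) {
  try {
    return Long.parseLong(cursorArgument);
  } catch (NumberFormatException e) {
    throw new IllegalArgumentException("ERR invalid cursor"); // illustrative error text
  }
}
{code}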

> Radish HSCAN implementation does not accept values for CURSOR argument that 
> match Redis
> ---
>
> Key: GEODE-9427
> URL: https://issues.apache.org/jira/browse/GEODE-9427
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Affects Versions: 1.15.0
>Reporter: Donal Evans
>Assignee: Donal Evans
>Priority: Major
> Fix For: 1.15.0
>
>
> The HSCAN command takes an argument, CURSOR, which in native Redis can be any 
> value between -18446744073709551615 and 18446744073709551615 (the maximum 
> value of an unsigned long). The Radish implementation of HSCAN only accepts 
> values in the range {{Integer.MIN_VALUE}} -> {{Integer.MAX_VALUE}} and 
> returns an error if values outside this range are used.
> The Radish HSCAN implementation should be modified to accept the same range 
> of values as Redis. Examples of this can be found in the implementations of 
> the currently unsupported SCAN and SSCAN commands.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (GEODE-9427) Radish HSCAN implementation does not accept values for CURSOR argument that match Redis

2021-09-15 Thread Donal Evans (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donal Evans reopened GEODE-9427:


> Radish HSCAN implementation does not accept values for CURSOR argument that 
> match Redis
> ---
>
> Key: GEODE-9427
> URL: https://issues.apache.org/jira/browse/GEODE-9427
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Affects Versions: 1.15.0
>Reporter: Donal Evans
>Assignee: Donal Evans
>Priority: Major
> Fix For: 1.15.0
>
>
> The HSCAN command takes an argument, CURSOR, which in native Redis can be any 
> value between -18446744073709551615 and 18446744073709551615 (the maximum 
> value of an unsigned long). The Radish implementation of HSCAN only accepts 
> values in the range {{Integer.MIN_VALUE}} -> {{Integer.MAX_VALUE}} and 
> returns an error if values outside this range are used.
> The Radish HSCAN implementation should be modified to accept the same range 
> of values as Redis. Examples of this can be found in the implementations of 
> the currently unsupported SCAN and SSCAN commands.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-9427) Radish HSCAN implementation does not accept values for CURSOR argument that match Redis

2021-09-15 Thread Donal Evans (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donal Evans resolved GEODE-9427.

Resolution: Won't Fix

> Radish HSCAN implementation does not accept values for CURSOR argument that 
> match Redis
> ---
>
> Key: GEODE-9427
> URL: https://issues.apache.org/jira/browse/GEODE-9427
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Affects Versions: 1.15.0
>Reporter: Donal Evans
>Assignee: Donal Evans
>Priority: Major
> Fix For: 1.15.0
>
>
> The HSCAN command takes an argument, CURSOR, which in native Redis can be any 
> value between -18446744073709551615 and 18446744073709551615 (the maximum 
> value of an unsigned long). The Radish implementation of HSCAN only accepts 
> values in the range {{Integer.MIN_VALUE}} -> {{Integer.MAX_VALUE}} and 
> returns an error if values outside this range are used.
> The Radish HSCAN implementation should be modified to accept the same range 
> of values as Redis. Examples of this can be found in the implementations of 
> the currently unsupported SCAN and SSCAN commands.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-9605) Using hard-coded character literals in Redis module is fine

2021-09-15 Thread Donal Evans (Jira)
Donal Evans created GEODE-9605:
--

 Summary: Using hard-coded character literals in Redis module is 
fine
 Key: GEODE-9605
 URL: https://issues.apache.org/jira/browse/GEODE-9605
 Project: Geode
  Issue Type: Improvement
  Components: redis
Affects Versions: 1.15.0
Reporter: Donal Evans


A comment in the StringBytesGlossary class (formerly in the Coder class) states:

{noformat}
/**
 * Important note
 * 
 * Do not use '' <-- java primitive chars. Redis uses \{@link Coder#CHARSET} 
encoding so we should
 * not risk java handling char to byte conversions, rather just hard code 
\{@link Coder#CHARSET}
 * chars as bytes
 */
{noformat}

which has led to many single-byte constants being introduced in the 
StringBytesGlossary class for use in comparisons. However, since these 
primitives are handled at compile time and the compiler always uses UTF-16, 
there is no need to work around any platform-specific character set issues. To 
simplify the code, the existing character constants should be inlined and 
removed from the StringBytesGlossary class, along with the above comment.
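A small illustration of the proposed simplification; the helper and the '*' 
example are hypothetical, and the point is only that the char-to-byte handling of 
a literal happens at compile time.

{code:java}
class CharLiteralSketch {
  // Before (current style): a named single-byte constant, e.g.
  //   static final byte bASTERISK = (byte) '*';
  //   if (bytes[0] == bASTERISK) { ... }

  // After: the character literal is used directly. Both operands are promoted
  // to int at compile time, so no runtime charset conversion is involved.
  static boolean isArrayHeader(byte[] bytes) {
    return bytes.length > 0 && bytes[0] == '*';
  }
}
{code}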



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-9605) Using hard-coded character literals in Redis module is fine

2021-09-15 Thread Donal Evans (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donal Evans reassigned GEODE-9605:
--

Assignee: Donal Evans

> Using hard-coded character literals in Redis module is fine
> ---
>
> Key: GEODE-9605
> URL: https://issues.apache.org/jira/browse/GEODE-9605
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Affects Versions: 1.15.0
>Reporter: Donal Evans
>Assignee: Donal Evans
>Priority: Major
>
> A comment in the StringBytesGlossary class (formerly in the Coder class) 
> states:
> {noformat}
> /**
>  * Important note
>  * 
>  * Do not use '' <-- java primitive chars. Redis uses \{@link Coder#CHARSET} 
> encoding so we should
>  * not risk java handling char to byte conversions, rather just hard code 
> \{@link Coder#CHARSET}
>  * chars as bytes
>  */
> {noformat}
> which has led to many single-byte constants being introduced in the 
> StringBytesGlossary class for use in comparisons. However, since these 
> primitives are handled at compile time and the compiler always uses UTF-16, 
> there is no need to work around any platform-specific character set issues. 
> To simplify the code, the existing character constants should be inlined and 
> removed from the StringBytesGlossary class, along with the above comment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-9605) Using hard-coded character literals in Redis module is fine

2021-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated GEODE-9605:
--
Labels: pull-request-available  (was: )

> Using hard-coded character literals in Redis module is fine
> ---
>
> Key: GEODE-9605
> URL: https://issues.apache.org/jira/browse/GEODE-9605
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Affects Versions: 1.15.0
>Reporter: Donal Evans
>Assignee: Donal Evans
>Priority: Major
>  Labels: pull-request-available
>
> A comment in the StringBytesGlossary class (formerly in the Coder class) 
> states:
> {noformat}
> /**
>  * Important note
>  * 
>  * Do not use '' <-- java primitive chars. Redis uses \{@link Coder#CHARSET} 
> encoding so we should
>  * not risk java handling char to byte conversions, rather just hard code 
> \{@link Coder#CHARSET}
>  * chars as bytes
>  */
> {noformat}
> which has led to many single-byte constants being introduced in the 
> StringBytesGlossary class for use in comparisons. However, since these 
> primitives are handled at compile time and the compiler always uses UTF-16, 
> there is no need to work around any platform-specific character set issues. 
> To simplify the code, the existing character constants should be inlined and 
> removed from the StringBytesGlossary class, along with the above comment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-9575) redis publish sends an extra message to each server

2021-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated GEODE-9575:
--
Labels: pull-request-available  (was: )

> redis publish sends an extra message to each server
> ---
>
> Key: GEODE-9575
> URL: https://issues.apache.org/jira/browse/GEODE-9575
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
>
> The redis publish command uses a geode function to distribute the publish to 
> each server that may have subscriptions. When it does this it calls 
> PartitionRegionHelper.getPartitionRegionInfo. It turns out the implementation 
> of this method sends a message to each data store: 
> FetchPartitionDetailsMessage.
> Since all the redis publish command needs to do is execute the function on 
> each server that hosts the redis data region, it could instead do something 
> like the following in RegionProvider.getRegionMembers:
> {code:java}
> // Set element type assumed here to be InternalDistributedMember.
> Set<InternalDistributedMember> otherMembers =
>     partitionedRegion.getRegionAdvisor().adviseDataStore();
> Set<InternalDistributedMember> result = new HashSet<>(otherMembers.size() + 1);
> result.addAll(otherMembers);
> result.add(partitionedRegion.getDistributionManager().getDistributionManagerId());
> return result;
> {code}
> When I did this I started seeing one of the tests fail. It looked like it 
> might have had to do with something left around from one test method 
> interfering with another. It is possible that the older, slower code works in 
> this test because the extra messaging slows publish down. It was a dunit test 
> that did some HA (starting and stopping servers). I verified that this code 
> produces the same set of members as the old code, but I did not figure out 
> what was wrong with the test that started failing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-9552) ForcedDisconnectException is not handled in ExecutionHandlerContext

2021-09-15 Thread Donal Evans (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donal Evans reassigned GEODE-9552:
--

Assignee: Donal Evans

> ForcedDisconnectException is not handled in ExecutionHandlerContext
> ---
>
> Key: GEODE-9552
> URL: https://issues.apache.org/jira/browse/GEODE-9552
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Affects Versions: 1.15.0
>Reporter: Donal Evans
>Assignee: Donal Evans
>Priority: Major
>  Labels: pull-request-available
>
> As part of the changes in GEODE-9303, handling of various exceptions related 
> to function execution was removed from the 
> {{ExecutionHandlerContext.getExceptionResponse()}} method. However, 
> {{ForcedDisconnectException}} can still occur due to e.g. network partition, 
> and so handling for this exception should be restored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-9552) ForcedDisconnectException is not handled in ExecutionHandlerContext

2021-09-15 Thread Donal Evans (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donal Evans resolved GEODE-9552.

Fix Version/s: 1.15.0
   Resolution: Fixed

> ForcedDisconnectException is not handled in ExecutionHandlerContext
> ---
>
> Key: GEODE-9552
> URL: https://issues.apache.org/jira/browse/GEODE-9552
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Affects Versions: 1.15.0
>Reporter: Donal Evans
>Assignee: Donal Evans
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> As part of the changes in GEODE-9303, handling of various exceptions related 
> to function execution was removed from the 
> {{ExecutionHandlerContext.getExceptionResponse()}} method. However, 
> {{ForcedDisconnectException}} can still occur due to e.g. network partition, 
> and so handling for this exception should be restored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-9552) ForcedDisconnectException is not handled in ExecutionHandlerContext

2021-09-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17415826#comment-17415826
 ] 

ASF subversion and git services commented on GEODE-9552:


Commit d0113fc2eb3a2eedfa1464aab733c7af148d4539 in geode's branch 
refs/heads/develop from Donal Evans
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=d0113fc ]

GEODE-9552: Change how Radish server shutdown/disconnect is handled (#6871)

- Close client connection on CancelException
 - Fix handling of RedisCommandParserException
 - Remove unused constants

Authored-by: Donal Evans 
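A rough sketch of the first bullet above (closing the client connection on 
CancelException); the handler class and root-cause walk are invented for the 
example, with only CancelException and Netty's ChannelHandlerContext being real 
types, so this illustrates the idea rather than the actual 
ExecutionHandlerContext change.

{code:java}
import io.netty.channel.ChannelHandlerContext;
import org.apache.geode.CancelException;

class DisconnectHandlingSketch {
  /**
   * If the server is shutting down or was force-disconnected, close the
   * client's channel instead of trying to write a Redis error response.
   */
  void handleException(ChannelHandlerContext ctx, Throwable cause) {
    Throwable root = cause;
    while (root.getCause() != null) {
      root = root.getCause();
    }
    if (root instanceof CancelException) {
      ctx.close(); // drop the connection; the client can reconnect to another server
      return;
    }
    // Other exception types would be translated into Redis error responses here.
  }
}
{code}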

> ForcedDisconnectException is not handled in ExecutionHandlerContext
> ---
>
> Key: GEODE-9552
> URL: https://issues.apache.org/jira/browse/GEODE-9552
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Affects Versions: 1.15.0
>Reporter: Donal Evans
>Assignee: Donal Evans
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> As part of the changes in GEODE-9303, handling of various exceptions related 
> to function execution was removed from the 
> {{ExecutionHandlerContext.getExceptionResponse()}} method. However, 
> {{ForcedDisconnectException}} can still occur due to e.g. network partition, 
> and so handling for this exception should be restored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-9582) redis glob pattern should never throw PatternSyntaxException

2021-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated GEODE-9582:
--
Labels: pull-request-available  (was: )

> redis glob pattern should never throw PatternSyntaxException
> 
>
> Key: GEODE-9582
> URL: https://issues.apache.org/jira/browse/GEODE-9582
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>  Labels: pull-request-available
>
> The GlobPattern class converts a user's glob pattern into a pattern that is 
> compiled by the JDK's Pattern.compile method. Some character sequences will 
> cause the JDK to throw PatternSyntaxException. For example, giving it the 
> bytes {{stringToBytes("\C")}} causes an exception.
>  Native redis with this same pattern treats it as just "C".
>  I think we need to look at every case in which the JDK compile throws 
> PatternSyntaxException and make sure GlobPattern will not submit a pattern to 
> Pattern.compile that will cause it to throw.
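As a rough, simplified sketch of the kind of pre-processing the ticket asks for 
(not GlobPattern's actual code, and it ignores glob character classes such as 
[a-z]): a backslash followed by any character is reduced to that literal 
character before the pattern reaches Pattern.compile, so "\C" matches like plain 
"C" instead of throwing PatternSyntaxException.

{code:java}
import java.util.regex.Pattern;

class GlobSketch {
  static Pattern compileGlob(String glob) {
    StringBuilder regex = new StringBuilder();
    for (int i = 0; i < glob.length(); i++) {
      char c = glob.charAt(i);
      if (c == '\\' && i + 1 < glob.length()) {
        // "\X" becomes the literal character X, quoted for the regex engine.
        regex.append(Pattern.quote(String.valueOf(glob.charAt(++i))));
      } else if (c == '*') {
        regex.append(".*");
      } else if (c == '?') {
        regex.append('.');
      } else {
        regex.append(Pattern.quote(String.valueOf(c)));
      }
    }
    return Pattern.compile(regex.toString());
  }
}
{code}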



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-9602) QueryObserver improvements

2021-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-9602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated GEODE-9602:
--
Labels: pull-request-available  (was: )

> QueryObserver improvements
> --
>
> Key: GEODE-9602
> URL: https://issues.apache.org/jira/browse/GEODE-9602
> Project: Geode
>  Issue Type: Bug
>  Components: querying
>Reporter: Alberto Gomez
>Assignee: Alberto Gomez
>Priority: Major
>  Labels: pull-request-available
>
> The QueryObserver interface allows creating classes that are notified about 
> query events.
> The way to set a QueryObserver is by means of the QueryObserverHolder class, 
> which holds a single instance of a QueryObserver.
> This mechanism currently has two problems:
>  * The QueryObserverHolder class is not thread-safe. The getInstance() method 
> could return an undefined observer instance.
>  * Given that the observer is retrieved via the 
> QueryObserverHolder::getInstance() method at different points of the query 
> execution, if several queries with different observers are run in parallel, 
> the observer for a query may be changed in the middle of its execution, with 
> unexpected results for the queries.
> In order to solve the above problems, the following is proposed:
>  * Make the QueryObserverHolder class thread-safe.
>  * Allow having an observer per query. A simple way to allow this is to 
> set the observer in the query context when the query is started. That way, 
> several queries could run in parallel, each with its own observer.
> Apart from the above, it has been observed that no QueryObserver 
> before/after IterationEvaluation callbacks are invoked when the query is 
> using indexes.
> It is also proposed to add these before/after IterationEvaluation callbacks 
> so that they are also called when the query is using indexes.
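A minimal sketch of the first proposed fix (a thread-safe holder) using an 
AtomicReference; QueryObserver and QueryObserverAdapter are existing Geode query 
classes, but this holder is only an illustration of the idea, not the proposed 
patch, and the per-query-context half of the proposal is not shown.

{code:java}
import java.util.concurrent.atomic.AtomicReference;
import org.apache.geode.cache.query.internal.QueryObserver;
import org.apache.geode.cache.query.internal.QueryObserverAdapter;

public class ThreadSafeQueryObserverHolder {
  private static final AtomicReference<QueryObserver> OBSERVER =
      new AtomicReference<>(new QueryObserverAdapter()); // no-op default observer

  public static QueryObserver getInstance() {
    return OBSERVER.get(); // always returns a well-defined, safely published value
  }

  public static QueryObserver setInstance(QueryObserver observer) {
    return OBSERVER.getAndSet(observer); // returns the previous observer
  }
}
{code}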



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-9576) InternalFunctionInvocationTargetException when executing single hop function all buckets

2021-09-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17415901#comment-17415901
 ] 

ASF GitHub Bot commented on GEODE-9576:
---

jvarenina commented on a change in pull request #864:
URL: https://github.com/apache/geode-native/pull/864#discussion_r709813659



##
File path: cppcache/src/ClientMetadataService.cpp
##
@@ -578,12 +579,10 @@ ClientMetadataService::pruneNodes(
 const auto locations = metadata->adviseServerLocations(bucketId);
 if (locations.size() == 0) {
   LOGDEBUG(
-  "ClientMetadataService::pruneNodes Since no server location "
-  "available for bucketId = %d  putting it into "
-  "bucketSetWithoutServer ",
+  "ClientMetadataService::pruneNodes Use non single-hop path "
+  "since no server location is available for bucketId = %d",
   bucketId);
-  bucketSetWithoutServer.insert(bucketId);
-  continue;
+  return nullptr;

Review comment:
   Hi @echobravopapa ,
   
   Is this integration test OK with you? Maybe remove the request-changes review 
if there is nothing else to add?
   
   Thanks/Jakov 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscr...@geode.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> InternalFunctionInvocationTargetException when executing single hop function 
> all buckets
> 
>
> Key: GEODE-9576
> URL: https://issues.apache.org/jira/browse/GEODE-9576
> Project: Geode
>  Issue Type: Bug
>  Components: native client
>Reporter: Jakov Varenina
>Assignee: Jakov Varenina
>Priority: Major
>  Labels: pull-request-available
>
>  *InternalFunctionInvocationTargetException: Multiple target nodes found for 
> single hop operation* occurs on the native client when executing a function in 
> a single-hop manner for all buckets during the period when the client's bucket 
> metadata doesn't contain all bucket locations.
> In this case the Java client executes functions in a non-single-hop manner 
> until it receives the locations of all buckets on the servers. The solution in 
> the native client would be to implement the same handling as in the Java 
> client.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)