[jira] [Resolved] (GEODE-9322) Solve potential race condition in TransactionCleaningTest
[ https://issues.apache.org/jira/browse/GEODE-9322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mario Salazar de Torres resolved GEODE-9322. Resolution: Fixed > Solve potential race condition in TransactionCleaningTest > - > > Key: GEODE-9322 > URL: https://issues.apache.org/jira/browse/GEODE-9322 > Project: Geode > Issue Type: Bug > Components: native client >Reporter: Mario Salazar de Torres >Assignee: Mario Salazar de Torres >Priority: Major > Labels: pull-request-available > > A possible race condition was detected in this new IT. > Given that there is no check for server start/stop, the test might > proceed before the server has actually stopped/started. > -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Closed] (GEODE-9322) Solve potential race condition in TransactionCleaningTest
[ https://issues.apache.org/jira/browse/GEODE-9322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mario Salazar de Torres closed GEODE-9322. -- > Solve potential race condition in TransactionCleaningTest > - > > Key: GEODE-9322 > URL: https://issues.apache.org/jira/browse/GEODE-9322 > Project: Geode > Issue Type: Bug > Components: native client >Reporter: Mario Salazar de Torres >Assignee: Mario Salazar de Torres >Priority: Major > Labels: pull-request-available > > A possible race condition was detected in this new IT. > Given that there is no check for server start/stop, the test might > proceed before the server has actually stopped/started. > -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (GEODE-9941) Coredump during PdxSerializable object deserialization
[ https://issues.apache.org/jira/browse/GEODE-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mario Salazar de Torres resolved GEODE-9941. Resolution: Duplicate > Coredump during PdxSerializable object deserialization > -- > > Key: GEODE-9941 > URL: https://issues.apache.org/jira/browse/GEODE-9941 > Project: Geode > Issue Type: Bug > Components: native client >Reporter: Mario Salazar de Torres >Assignee: Mario Salazar de Torres >Priority: Major > > *GIVEN* a cluster with a single server and a single locator with a > PdxSerializable-like class implementation named Order > *AND* a geode-native client with one PdxSerializable class implementation named > Order, matching the implementation on the cluster > *AND* also on-client-disconnect-clear-pdxType-Ids=true in the client configuration > *WHEN* an Order object is being deserialized > *WHILE* the cluster is being restarted > *THEN* a coredump happens because PdxType=nullptr > — > {*}Additional information{*}. As seen in early troubleshooting, the coredump > happens because the PdxType is fetched from the PdxTypeRegistry > by its class name, but the PdxTypeRegistry has been cleaned up during serialization > because subscription redundancy was lost. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Closed] (GEODE-9941) Coredump during PdxSerializable object deserialization
[ https://issues.apache.org/jira/browse/GEODE-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mario Salazar de Torres closed GEODE-9941. -- > Coredump during PdxSerializable object deserialization > -- > > Key: GEODE-9941 > URL: https://issues.apache.org/jira/browse/GEODE-9941 > Project: Geode > Issue Type: Bug > Components: native client >Reporter: Mario Salazar de Torres >Assignee: Mario Salazar de Torres >Priority: Major > > *GIVEN* a cluster with a single server and a single locator with a > PdxSerializable-like class implementation named Order > *AND* a geode-native client with one PdxSerializable class implementation named > Order, matching the implementation on the cluster > *AND* also on-client-disconnect-clear-pdxType-Ids=true in the client configuration > *WHEN* an Order object is being deserialized > *WHILE* the cluster is being restarted > *THEN* a coredump happens because PdxType=nullptr > — > {*}Additional information{*}. As seen in early troubleshooting, the coredump > happens because the PdxType is fetched from the PdxTypeRegistry > by its class name, but the PdxTypeRegistry has been cleaned up during serialization > because subscription redundancy was lost. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (GEODE-10276) Refactor PDX (de)serialization code to align it with Java client
Mario Salazar de Torres created GEODE-10276: --- Summary: Refactor PDX (de)serialization code to align it with Java client Key: GEODE-10276 URL: https://issues.apache.org/jira/browse/GEODE-10276 Project: Geode Issue Type: Improvement Components: native client Reporter: Mario Salazar de Torres Currently there are the following open issues regarding PDX (de)serialization: * [GEODE-9968 - Fix deserialization for new fields in PdxSerializable class|https://issues.apache.org/jira/browse/GEODE-9968] * [GEODE-9753 - Coredump during PdxSerializable object serialization|https://issues.apache.org/jira/browse/GEODE-9753] * [GEODE-10220 - Coredump while initializing PdxType remoteToLocal|https://issues.apache.org/jira/browse/GEODE-10220] * [GEODE-10255 - PdxSerializable not working correctly for multiple versions of the same class|https://issues.apache.org/jira/browse/GEODE-10255] Also, the implementation from [GEODE-8212: Reduce connections to server to get type id|https://issues.apache.org/jira/browse/GEODE-8212] introduces issues with PDX entries whose fields are a permutation of each other: PdxTypes whose fields are a permutation might use the wrong offsets, leading to corrupt serialization. This was not taken into account when that PR was merged. So that change should be reverted and, possibly, an alternative solution proposed. In order to tackle these issues, a code refactoring is needed to introduce the following implementations: * A single type of PdxWriter * One PdxReader implementation that tracks unread data, and another that doesn't. * An implementation for PdxInstances that guarantees that fields are actually written in alphabetical order, independently of the writeFields call order (see the sketch after this message). This should tackle the issue described above regarding GEODE-8212. * Also, it would be ideal to make the PDX code cleaner and easier to understand, though that is a more complex and, admittedly, subjective matter. -- This message was sent by Atlassian Jira (v8.20.7#820007)
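The alphabetical-ordering item above can be illustrated with a small, self-contained sketch. This is not Geode's actual PdxWriter/PdxInstance API; the class and method names below are purely illustrative. The point is that field writes are buffered and flushed sorted by field name, so the serialized layout (and therefore the offsets) no longer depends on the order of the writeField calls:

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.TreeMap;

// Illustrative only: buffers field writes and emits them sorted by field name, so two
// writers that supply the same fields in a different order produce the same byte layout.
public class OrderedFieldWriter {
  private final Map<String, byte[]> fields = new TreeMap<>(); // TreeMap keeps keys sorted

  public OrderedFieldWriter writeField(String name, byte[] encodedValue) {
    fields.put(name, encodedValue);
    return this;
  }

  public byte[] toBytes() throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (Map.Entry<String, byte[]> field : fields.entrySet()) {
      out.write(field.getKey().getBytes(StandardCharsets.UTF_8));
      out.write(0); // name terminator; illustrative framing, not the real PDX wire format
      out.write(field.getValue());
    }
    return out.toByteArray();
  }
}
{code}

Under a scheme like this, two versions of the same class whose fields are a permutation of each other serialize to the same layout, which is the property the GEODE-8212 concern above is about.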
[jira] [Assigned] (GEODE-10276) Refactor PDX (de)serialization code to align it with Java client
[ https://issues.apache.org/jira/browse/GEODE-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mario Salazar de Torres reassigned GEODE-10276: --- Assignee: Mario Salazar de Torres > Refactor PDX (de)serialization code to align it with Java client > > > Key: GEODE-10276 > URL: https://issues.apache.org/jira/browse/GEODE-10276 > Project: Geode > Issue Type: Improvement > Components: native client >Reporter: Mario Salazar de Torres >Assignee: Mario Salazar de Torres >Priority: Major > > Currently there are the following open issues regarding PDX (de)serialization: > * [GEODE-9968 - Fix deserialization for new fields in PdxSerializable > class|https://issues.apache.org/jira/browse/GEODE-9968] > * [GEODE-9753 - Coredump during PdxSerializable object > serialization|https://issues.apache.org/jira/browse/GEODE-9753] > * [GEODE-10220 - Coredump while initializing PdxType > remoteToLocal|https://issues.apache.org/jira/browse/GEODE-10220] > * [GEODE-10255 - PdxSerializable not working correctly for multiple versions > of the same class|https://issues.apache.org/jira/browse/GEODE-10255] > Also, the implementation from [GEODE-8212: Reduce connections to > server to get type id|https://issues.apache.org/jira/browse/GEODE-8212] > introduces issues with PDX entries whose fields are a permutation of each other: > PdxTypes whose fields are a permutation might use the wrong offsets, > leading to corrupt serialization. This was not taken into > account when that PR was merged. > So that change should be reverted and, possibly, an alternative solution > proposed. > In order to tackle these issues, a code refactoring is needed to introduce > the following implementations: > * A single type of PdxWriter > * One PdxReader implementation that tracks unread data, and another that doesn't. > * An implementation for PdxInstances that guarantees that fields are > actually written in alphabetical order, independently of the writeFields call > order. This should tackle the issue described above regarding GEODE-8212. > * Also, it would be ideal to make the PDX code cleaner and easier to > understand, though that is a more complex and, admittedly, subjective matter. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Assigned] (GEODE-10277) Exception thrown when checking gatewaySender EventQueueSize, while restarting gateway sender with clean queue option
[ https://issues.apache.org/jira/browse/GEODE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mario Ivanac reassigned GEODE-10277: Assignee: Mario Ivanac > Exception thrown when checking gatewaySender EventQueueSize, while restarting > gateway sender with clean queue option > > > Key: GEODE-10277 > URL: https://issues.apache.org/jira/browse/GEODE-10277 > Project: Geode > Issue Type: Bug > Components: statistics >Reporter: Mario Ivanac >Assignee: Mario Ivanac >Priority: Major > Labels: needsTriage > > If we check EventQueueSize on a server whose parallel gateway > sender queue is full while the gateway sender is restarted with the --cleanqueue option, a > NullPointerException occurs in the JMX client. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (GEODE-10277) Exception thrown when checking gatewaySender EventQueueSize, while restarting gateway sender with clean queue option
[ https://issues.apache.org/jira/browse/GEODE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Murmann updated GEODE-10277: -- Labels: needsTriage (was: ) > Exception thrown when checking gatewaySender EventQueueSize, while restarting > gateway sender with clean queue option > > > Key: GEODE-10277 > URL: https://issues.apache.org/jira/browse/GEODE-10277 > Project: Geode > Issue Type: Bug > Components: statistics >Reporter: Mario Ivanac >Priority: Major > Labels: needsTriage > > If we check EventQueueSize on a server whose parallel gateway > sender queue is full while the gateway sender is restarted with the --cleanqueue option, a > NullPointerException occurs in the JMX client. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (GEODE-10277) Exception thrown when checking gatewaySender EventQueueSize, while restarting gateway sender with clean queue option
Mario Ivanac created GEODE-10277: Summary: Exception thrown when checking gatewaySender EventQueueSize, while restarting gateway sender with clean queue option Key: GEODE-10277 URL: https://issues.apache.org/jira/browse/GEODE-10277 Project: Geode Issue Type: Bug Components: statistics Reporter: Mario Ivanac If we check EventQueueSize on a server whose parallel gateway sender queue is full while the gateway sender is restarted with the --cleanqueue option, a NullPointerException occurs in the JMX client. -- This message was sent by Atlassian Jira (v8.20.7#820007)
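For context, the EventQueueSize check described above is typically done through JMX. The sketch below reads the attribute with the standard javax.management API and guards against a missing value on the client side; the JMX service URL and the GatewaySender ObjectName pattern are assumptions for illustration, and this does not address the server-side race that the ticket is really about:

{code:java}
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class EventQueueSizeCheck {
  public static void main(String[] args) throws Exception {
    // Assumed JMX manager endpoint; adjust host/port for the actual cluster.
    JMXServiceURL url =
        new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
    JMXConnector connector = JMXConnectorFactory.connect(url);
    try {
      MBeanServerConnection mbs = connector.getMBeanServerConnection();
      // Assumed ObjectName pattern for gateway sender MBeans; verify against the deployment.
      Set<ObjectName> senders =
          mbs.queryNames(new ObjectName("GemFire:service=GatewaySender,*"), null);
      for (ObjectName sender : senders) {
        // While the sender is restarted with --cleanqueue, this read can race with queue
        // re-creation, so treat a missing value as "not available" rather than failing.
        Object size = mbs.getAttribute(sender, "EventQueueSize");
        System.out.println(sender + " EventQueueSize=" + (size == null ? "n/a" : size));
      }
    } finally {
      connector.close();
    }
  }
}
{code}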
[jira] [Commented] (GEODE-10268) Peer-to-peer connection due to race condition overtakes the --server-port causing server to hang during startup
[ https://issues.apache.org/jira/browse/GEODE-10268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531725#comment-17531725 ] Jakov Varenina commented on GEODE-10268: Hi Dan, thank you again for your help. It would be worth having this guideline in the Apache Geode user guide. I will create a Jira ticket to document it if you agree? > Peer-to-peer connection due to race condition overtakes the --server-port > causing server to hang during startup > --- > > Key: GEODE-10268 > URL: https://issues.apache.org/jira/browse/GEODE-10268 > Project: Geode > Issue Type: Bug >Reporter: Jakov Varenina >Assignee: Jakov Varenina >Priority: Major > Attachments: reproducedBindRejectedDueToPeerToPeer.txt > > > {color:#0e101a}The issue is reproduced > ({color}[^reproducedBindRejectedDueToPeerToPeer.txt]{color:#0e101a} ) with > the patch at creating the acceptor that tries to bind the port already used > in peer-to-peer connection. The problem is that distributed system starts > before the client/server connection acceptor listener. Because of that, a > peer-to-peer connection may take the port configured in the --server-port > parameter. Also, it seems that these peer-to-peer connections take ports > outside the range configured {color}*{color:#0e101a}with the > membership-port-range{color}* {color:#0e101a}parameter:{color} > {code:java} > [vm1] membership-port-range=41000-61000{code} > The peer-to-peer connection: > {color:#0e101a}[vm1] [debug 2022/05/02 *11:15:57.968* {color}CEST server-1 > tid=0x1a] starting peer-to-peer > handshake on socket > Socket[addr=/192.168.1.36,port=49913,{color:#de350b}*localport=37392*{color}] > Server try to create acceptor later on: > {code:java} > [vm1] exeption for java.net.BindException: Failed to create server socket on > 192.168.1.36[37392] > [vm1] [info 2022/05/02 11:16:00.421 CEST server-1 Connection(1)-192.168.1.36> tid=0x1a] Got result: EXCEPTION_OCCURRED > [vm1] java.lang.RuntimeException: unable to start server > [vm1] at > org.apache.geode.test.junit.rules.ServerStarterRule.startServer(ServerStarterRule.java:225) > [vm1] at > org.apache.geode.test.junit.rules.ServerStarterRule.before(ServerStarterRule.java:99) > [vm1] at > org.apache.geode.test.dunit.rules.ClusterStartupRule.lambda$startServerVM$6d6c10c2$1(ClusterStartupRule.java:284) > [vm1] at > org.apache.geode.test.dunit.internal.IdentifiableCallable.call(IdentifiableCallable.java:41) > [vm1] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [vm1] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [vm1] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [vm1] at java.lang.reflect.Method.invoke(Method.java:498) > [vm1] at > org.apache.geode.test.dunit.internal.MethodInvoker.executeObject(MethodInvoker.java:123) > [vm1] at > org.apache.geode.test.dunit.internal.RemoteDUnitVM.executeMethodOnObject(RemoteDUnitVM.java:78) > [vm1] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [vm1] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [vm1] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [vm1] at java.lang.reflect.Method.invoke(Method.java:498) > [vm1] at > sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357) > [vm1] at sun.rmi.transport.Transport$1.run(Transport.java:200) > [vm1] at sun.rmi.transport.Transport$1.run(Transport.java:197) > [vm1] at 
java.security.AccessController.doPrivileged(Native Method) > [vm1] at sun.rmi.transport.Transport.serviceCall(Transport.java:196) > [vm1] at > sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:573) > [vm1] at > sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:834) > [vm1] at > sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:688) > [vm1] at java.security.AccessController.doPrivileged(Native Method) > [vm1] at > sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:687) > [vm1] at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > [vm1] at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > [vm1] at java.lang.Thread.run(Thread.java:748) > [vm1] Caused by: java.net.BindException: Failed to create server socket on > 192.168.1.36[37392] > [vm1] at > org.apache.geode.distributed.internal.tcpserver.ClusterSocketCreatorImpl.createServerSocket(ClusterSocketCreatorImpl.java:75) > [vm1] at > org.
[jira] [Assigned] (GEODE-10268) Peer-to-peer connection due to race condition overtakes the --server-port causing server to hang during startup
[ https://issues.apache.org/jira/browse/GEODE-10268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jakov Varenina reassigned GEODE-10268: -- Assignee: (was: Jakov Varenina) > Peer-to-peer connection due to race condition overtakes the --server-port > causing server to hang during startup > --- > > Key: GEODE-10268 > URL: https://issues.apache.org/jira/browse/GEODE-10268 > Project: Geode > Issue Type: Bug >Reporter: Jakov Varenina >Priority: Major > Attachments: reproducedBindRejectedDueToPeerToPeer.txt > > > {color:#0e101a}The issue is reproduced > ({color}[^reproducedBindRejectedDueToPeerToPeer.txt]{color:#0e101a} ) with > the patch at creating the acceptor that tries to bind the port already used > in peer-to-peer connection. The problem is that distributed system starts > before the client/server connection acceptor listener. Because of that, a > peer-to-peer connection may take the port configured in the --server-port > parameter. Also, it seems that these peer-to-peer connections take ports > outside the range configured {color}*{color:#0e101a}with the > membership-port-range{color}* {color:#0e101a}parameter:{color} > {code:java} > [vm1] membership-port-range=41000-61000{code} > The peer-to-peer connection: > {color:#0e101a}[vm1] [debug 2022/05/02 *11:15:57.968* {color}CEST server-1 > tid=0x1a] starting peer-to-peer > handshake on socket > Socket[addr=/192.168.1.36,port=49913,{color:#de350b}*localport=37392*{color}] > Server try to create acceptor later on: > {code:java} > [vm1] exeption for java.net.BindException: Failed to create server socket on > 192.168.1.36[37392] > [vm1] [info 2022/05/02 11:16:00.421 CEST server-1 Connection(1)-192.168.1.36> tid=0x1a] Got result: EXCEPTION_OCCURRED > [vm1] java.lang.RuntimeException: unable to start server > [vm1] at > org.apache.geode.test.junit.rules.ServerStarterRule.startServer(ServerStarterRule.java:225) > [vm1] at > org.apache.geode.test.junit.rules.ServerStarterRule.before(ServerStarterRule.java:99) > [vm1] at > org.apache.geode.test.dunit.rules.ClusterStartupRule.lambda$startServerVM$6d6c10c2$1(ClusterStartupRule.java:284) > [vm1] at > org.apache.geode.test.dunit.internal.IdentifiableCallable.call(IdentifiableCallable.java:41) > [vm1] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [vm1] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [vm1] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [vm1] at java.lang.reflect.Method.invoke(Method.java:498) > [vm1] at > org.apache.geode.test.dunit.internal.MethodInvoker.executeObject(MethodInvoker.java:123) > [vm1] at > org.apache.geode.test.dunit.internal.RemoteDUnitVM.executeMethodOnObject(RemoteDUnitVM.java:78) > [vm1] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [vm1] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [vm1] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [vm1] at java.lang.reflect.Method.invoke(Method.java:498) > [vm1] at > sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357) > [vm1] at sun.rmi.transport.Transport$1.run(Transport.java:200) > [vm1] at sun.rmi.transport.Transport$1.run(Transport.java:197) > [vm1] at java.security.AccessController.doPrivileged(Native Method) > [vm1] at sun.rmi.transport.Transport.serviceCall(Transport.java:196) > [vm1] at > sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:573) > [vm1] at > 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:834) > [vm1] at > sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:688) > [vm1] at java.security.AccessController.doPrivileged(Native Method) > [vm1] at > sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:687) > [vm1] at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > [vm1] at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > [vm1] at java.lang.Thread.run(Thread.java:748) > [vm1] Caused by: java.net.BindException: Failed to create server socket on > 192.168.1.36[37392] > [vm1] at > org.apache.geode.distributed.internal.tcpserver.ClusterSocketCreatorImpl.createServerSocket(ClusterSocketCreatorImpl.java:75) > [vm1] at > org.apache.geode.internal.net.SCClusterSocketCreator.createServerSocket(SCClusterSocketCreator.java:55) > [vm1] at > org.apache.geode.internal.net.SocketCreator.createServerSocket(SocketCreator.java:491) > [vm1] at
[jira] [Resolved] (GEODE-10268) Peer-to-peer connection due to race condition overtakes the --server-port causing server to hang during startup
[ https://issues.apache.org/jira/browse/GEODE-10268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jakov Varenina resolved GEODE-10268. Resolution: Not A Bug > Peer-to-peer connection due to race condition overtakes the --server-port > causing server to hang during startup > --- > > Key: GEODE-10268 > URL: https://issues.apache.org/jira/browse/GEODE-10268 > Project: Geode > Issue Type: Bug >Reporter: Jakov Varenina >Priority: Major > Attachments: reproducedBindRejectedDueToPeerToPeer.txt > > > {color:#0e101a}The issue is reproduced > ({color}[^reproducedBindRejectedDueToPeerToPeer.txt]{color:#0e101a} ) with > the patch at creating the acceptor that tries to bind the port already used > in peer-to-peer connection. The problem is that distributed system starts > before the client/server connection acceptor listener. Because of that, a > peer-to-peer connection may take the port configured in the --server-port > parameter. Also, it seems that these peer-to-peer connections take ports > outside the range configured {color}*{color:#0e101a}with the > membership-port-range{color}* {color:#0e101a}parameter:{color} > {code:java} > [vm1] membership-port-range=41000-61000{code} > The peer-to-peer connection: > {color:#0e101a}[vm1] [debug 2022/05/02 *11:15:57.968* {color}CEST server-1 > tid=0x1a] starting peer-to-peer > handshake on socket > Socket[addr=/192.168.1.36,port=49913,{color:#de350b}*localport=37392*{color}] > Server try to create acceptor later on: > {code:java} > [vm1] exeption for java.net.BindException: Failed to create server socket on > 192.168.1.36[37392] > [vm1] [info 2022/05/02 11:16:00.421 CEST server-1 Connection(1)-192.168.1.36> tid=0x1a] Got result: EXCEPTION_OCCURRED > [vm1] java.lang.RuntimeException: unable to start server > [vm1] at > org.apache.geode.test.junit.rules.ServerStarterRule.startServer(ServerStarterRule.java:225) > [vm1] at > org.apache.geode.test.junit.rules.ServerStarterRule.before(ServerStarterRule.java:99) > [vm1] at > org.apache.geode.test.dunit.rules.ClusterStartupRule.lambda$startServerVM$6d6c10c2$1(ClusterStartupRule.java:284) > [vm1] at > org.apache.geode.test.dunit.internal.IdentifiableCallable.call(IdentifiableCallable.java:41) > [vm1] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [vm1] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [vm1] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [vm1] at java.lang.reflect.Method.invoke(Method.java:498) > [vm1] at > org.apache.geode.test.dunit.internal.MethodInvoker.executeObject(MethodInvoker.java:123) > [vm1] at > org.apache.geode.test.dunit.internal.RemoteDUnitVM.executeMethodOnObject(RemoteDUnitVM.java:78) > [vm1] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [vm1] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [vm1] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [vm1] at java.lang.reflect.Method.invoke(Method.java:498) > [vm1] at > sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357) > [vm1] at sun.rmi.transport.Transport$1.run(Transport.java:200) > [vm1] at sun.rmi.transport.Transport$1.run(Transport.java:197) > [vm1] at java.security.AccessController.doPrivileged(Native Method) > [vm1] at sun.rmi.transport.Transport.serviceCall(Transport.java:196) > [vm1] at > sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:573) > [vm1] at > 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:834) > [vm1] at > sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:688) > [vm1] at java.security.AccessController.doPrivileged(Native Method) > [vm1] at > sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:687) > [vm1] at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > [vm1] at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > [vm1] at java.lang.Thread.run(Thread.java:748) > [vm1] Caused by: java.net.BindException: Failed to create server socket on > 192.168.1.36[37392] > [vm1] at > org.apache.geode.distributed.internal.tcpserver.ClusterSocketCreatorImpl.createServerSocket(ClusterSocketCreatorImpl.java:75) > [vm1] at > org.apache.geode.internal.net.SCClusterSocketCreator.createServerSocket(SCClusterSocketCreator.java:55) > [vm1] at > org.apache.geode.internal.net.SocketCreator.createServerSocket(SocketCreator.java:491) > [vm1] at > org.apache.geod
[jira] [Assigned] (GEODE-10264) Geode user guide: remove the "Connect to the server from your application" section
[ https://issues.apache.org/jira/browse/GEODE-10264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Max Hufnagel reassigned GEODE-10264: Assignee: Max Hufnagel > Geode user guide: remove the "Connect to the server from your application" > section > -- > > Key: GEODE-10264 > URL: https://issues.apache.org/jira/browse/GEODE-10264 > Project: Geode > Issue Type: Bug > Components: docs >Affects Versions: 1.14.4 >Reporter: Dave Barnes >Assignee: Max Hufnagel >Priority: Major > Labels: pull-request-available > > Community member John Martin reported: > We need to remove the "Connect to the server from your application" section > from this page please: > https://geode.apache.org/docs/guide/114/getting_started/intro_to_clients.html > it is not accurate, and was a section I thought I had deleted before > submitting, but apparently I had not. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-10264) Geode user guide: remove the "Connect to the server from your application" section
[ https://issues.apache.org/jira/browse/GEODE-10264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531730#comment-17531730 ] Max Hufnagel commented on GEODE-10264: -- https://geode.apache.org/docs/guide/114/getting_started/intro_to_clients.html no longer contains the following content: *Connect to the server from your application* {{import org.apache.geode.cache.client.ClientCache;}} {{import org.apache.geode.cache.client.ClientCacheFactory;}} {{public class HelloWorld {}} {{ public static void main(String[] args) {}} {{ ClientCache cache = new ClientCacheFactory().addPoolLocator("127.0.0.1", 10334).create();}} {{ System.out.println(cache.getDefaultPool().getLocators());}} {{ cache.close();}} {{ }}} {{}}} The information printed out should match the host and port of your Apache Geode instance locators and should resemble {{[/127.0.0.1:10334]}} > Geode user guide: remove the "Connect to the server from your application" > section > -- > > Key: GEODE-10264 > URL: https://issues.apache.org/jira/browse/GEODE-10264 > Project: Geode > Issue Type: Bug > Components: docs >Affects Versions: 1.14.4 >Reporter: Dave Barnes >Assignee: Max Hufnagel >Priority: Major > Labels: pull-request-available > > Community member John Martin reported: > We need to remove the "Connect to the server from your application" section > from this page please: > https://geode.apache.org/docs/guide/114/getting_started/intro_to_clients.html > it is not accurate, and was a section I thought I had deleted before > submitting, but apparently I had not. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-6183) CI Failure: LocatorLauncherRemoteFileIntegrationTest.startDeletesStaleControlFiles failed with ConditionTimeoutException
[ https://issues.apache.org/jira/browse/GEODE-6183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531742#comment-17531742 ] Geode Integration commented on GEODE-6183: -- Seen on support/1.14 in [integration-test-openjdk8 #57|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-support-1-14-main/jobs/integration-test-openjdk8/builds/57] ... see [test results|http://files.apachegeode-ci.info/builds/apache-support-1-14-main/1.14.5-build.0961/test-results/integrationTest/1651625071/] or download [artifacts|http://files.apachegeode-ci.info/builds/apache-support-1-14-main/1.14.5-build.0961/test-artifacts/1651625071/integrationtestfiles-openjdk8-1.14.5-build.0961.tgz]. > CI Failure: > LocatorLauncherRemoteFileIntegrationTest.startDeletesStaleControlFiles failed > with ConditionTimeoutException > > > Key: GEODE-6183 > URL: https://issues.apache.org/jira/browse/GEODE-6183 > Project: Geode > Issue Type: Bug > Components: build >Affects Versions: 1.14.0, 1.15.0 >Reporter: Eric Shu >Assignee: Kirk Lund >Priority: Major > Time Spent: 5h 50m > Remaining Estimate: 0h > > Test failed in > https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/IntegrationTestOpenJDK8/builds/223 > org.apache.geode.distributed.LocatorLauncherRemoteFileIntegrationTest > > startDeletesStaleControlFiles FAILED > org.awaitility.core.ConditionTimeoutException: Assertion condition > defined as a lambda expression in > org.apache.geode.distributed.LocatorLauncherRemoteIntegrationTestCase that > uses org.apache.geode.distributed.LocatorLauncher expected:<[online]> but > was:<[not responding]> within 300 seconds. > Caused by: > org.junit.ComparisonFailure: expected:<[online]> but was:<[not > responding]> -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-5782) LauncherMemberMXBeanIntegrationTest can fail intermittently
[ https://issues.apache.org/jira/browse/GEODE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531745#comment-17531745 ] Geode Integration commented on GEODE-5782: -- Seen on support/1.12 in [windows-core-integration-test-openjdk8 #63|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-support-1-12-main/jobs/windows-core-integration-test-openjdk8/builds/63] ... see [test results|http://files.apachegeode-ci.info/builds/apache-support-1-12-main/1.12.10-build.0383/test-results/integrationTest/1651638828/] or download [artifacts|http://files.apachegeode-ci.info/builds/apache-support-1-12-main/1.12.10-build.0383/test-artifacts/1651638828/windows-coreintegrationtestfiles-openjdk8-1.12.10-build.0383.tgz]. > LauncherMemberMXBeanIntegrationTest can fail intermittently > --- > > Key: GEODE-5782 > URL: https://issues.apache.org/jira/browse/GEODE-5782 > Project: Geode > Issue Type: Bug > Components: jmx >Affects Versions: 1.9.0 >Reporter: Jens Deppe >Assignee: Jens Deppe >Priority: Major > Fix For: 1.14.0 > > Time Spent: 50m > Remaining Estimate: 0h > > Noticed this failure: > {noformat} > org.apache.geode.distributed.LauncherMemberMXBeanIntegrationTest > > showOSMetrics_reconstructsOSMetricsFromCompositeDataType FAILED > org.junit.ComparisonFailure: expected:<204.[68]> but was:<204.[55]> > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at > org.apache.geode.distributed.LauncherMemberMXBeanIntegrationTest.showOSMetrics_reconstructsOSMetricsFromCompositeDataType(LauncherMemberMXBeanIntegrationTest.java:143) > {noformat} -- This message was sent by Atlassian Jira (v8.20.7#820007)
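The failure above is an exact equality comparison of a sampled OS metric (expected 204.68, got 204.55), which is inherently racy because the two values are read at different moments. Purely as an illustration of how such float comparisons are commonly made robust (not necessarily how GEODE-5782 itself was fixed), a tolerance-based AssertJ assertion looks like this; the 1.0 tolerance is an arbitrary illustrative bound:

{code:java}
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.data.Offset.offset;

public class OsMetricsToleranceExample {
  public static void main(String[] args) {
    double valueFromCompositeData = 204.55; // value reconstructed from the MXBean's CompositeData
    double valueSampledDirectly = 204.68;   // value read directly a moment later

    // Exact equality between two independently sampled readings is flaky;
    // asserting they are close within a tolerance is stable.
    assertThat(valueFromCompositeData).isCloseTo(valueSampledDirectly, offset(1.0));
  }
}
{code}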
[jira] [Commented] (GEODE-9890) DistributedAckRegionCCEDUnitTest > testClearOnNonReplicateWithConcurrentEvents FAILED
[ https://issues.apache.org/jira/browse/GEODE-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531749#comment-17531749 ] Geode Integration commented on GEODE-9890: -- Seen on support/1.12 in [distributed-test-openjdk11 #66|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-support-1-12-main/jobs/distributed-test-openjdk11/builds/66] ... see [test results|http://files.apachegeode-ci.info/builds/apache-support-1-12-main/1.12.10-build.0383/test-results/distributedTest/1651631246/] or download [artifacts|http://files.apachegeode-ci.info/builds/apache-support-1-12-main/1.12.10-build.0383/test-artifacts/1651631246/distributedtestfiles-openjdk11-1.12.10-build.0383.tgz]. > DistributedAckRegionCCEDUnitTest > > testClearOnNonReplicateWithConcurrentEvents FAILED > - > > Key: GEODE-9890 > URL: https://issues.apache.org/jira/browse/GEODE-9890 > Project: Geode > Issue Type: Bug > Components: client/server >Affects Versions: 1.12.0 >Reporter: Ray Ingles >Priority: Major > > This has similar behavior to GEODE-7702, but Gester states that the > underlying cause doesn't apply to 1.12, so we're opening up a new ticket. The > error seen is: > > {{> Task :geode-core:distributedTest}} > {{org.apache.geode.cache30.DistributedAckRegionCCEDUnitTest > > testClearOnNonReplicateWithConcurrentEvents FAILED}} > {{org.awaitility.core.ConditionTimeoutException: Assertion condition > defined as a lambda expression in > org.apache.geode.cache30.MultiVMRegionTestCase expected:<[6]> but was:<[3]> > within 300 seconds.}} > {{at > org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:145)}} > {{at > org.awaitility.core.AssertionCondition.await(AssertionCondition.java:122)}} > {{at > org.awaitility.core.AssertionCondition.await(AssertionCondition.java:32)}} > {{at > org.awaitility.core.ConditionFactory.until(ConditionFactory.java:902)}} > {{at > org.awaitility.core.ConditionFactory.untilAsserted(ConditionFactory.java:723)}} > {{at > org.apache.geode.cache30.MultiVMRegionTestCase.versionTestClearOnNonReplicateWithConcurrentEvents(MultiVMRegionTestCase.java:6447)}} > {{at > org.apache.geode.cache30.DistributedAckRegionCCEDUnitTest.testClearOnNonReplicateWithConcurrentEvents(DistributedAckRegionCCEDUnitTest.java:268)}} > {{Caused by:}} > {{org.junit.ComparisonFailure: expected:<[6]> but was:<[3]>}} -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-10215) WAN replication not working after re-creating the partitioned region
[ https://issues.apache.org/jira/browse/GEODE-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531771#comment-17531771 ] Dave Barnes commented on GEODE-10215: - I have approved adding a warning to the docs, because it's better than nothing. But it feels to me like an incomplete remedy for a design flaw. > WAN replication not working after re-creating the partitioned region > > > Key: GEODE-10215 > URL: https://issues.apache.org/jira/browse/GEODE-10215 > Project: Geode > Issue Type: Bug >Reporter: Jakov Varenina >Assignee: Jakov Varenina >Priority: Major > Labels: pull-request-available > > Steps to reproduce the issue: > Start multi-site with at least 3 servers on each site. If there are less than > three servers then issue will not reproduce. > Configuration site 1: > {code:java} > create disk-store --name=queue_disk_store --dir=ds2 > create gateway-sender -id="remote_site_2" --parallel="true" > --remote-distributed-system-id="1" -enable-persistence=true > --disk-store-name=queue_disk_store > create disk-store --name=data_disk_store --dir=ds1 > create region --name=example-region --type=PARTITION_PERSISTENT > --gateway-sender-id="remote_site_2" --disk-store=data_disk_store > --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false > #Configure the remote site 2 with the region and the gateway-receiver > #Run some traffic so that all buckets are created and data is replicated to > the other site > alter region --name=/example-region --gateway-sender-id="" > destroy region --name=/example-region > create region --name=example-region --type=PARTITION_PERSISTENT > --gateway-sender-id="remote_site_2" --disk-store=data_disk_store > --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false > #run traffic to see that some data is not replicated to the remote site 2 > {code} -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-10215) WAN replication not working after re-creating the partitioned region
[ https://issues.apache.org/jira/browse/GEODE-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531772#comment-17531772 ] ASF subversion and git services commented on GEODE-10215: - Commit ff9b3be5e3a11ac227856065f1a602b2c72a5229 in geode's branch refs/heads/develop from Jakov Varenina [ https://gitbox.apache.org/repos/asf?p=geode.git;h=ff9b3be5e3 ] GEODE-10215: Document warning for parallel gws (#7623) > WAN replication not working after re-creating the partitioned region > > > Key: GEODE-10215 > URL: https://issues.apache.org/jira/browse/GEODE-10215 > Project: Geode > Issue Type: Bug >Reporter: Jakov Varenina >Assignee: Jakov Varenina >Priority: Major > Labels: pull-request-available > > Steps to reproduce the issue: > Start multi-site with at least 3 servers on each site. If there are less than > three servers then issue will not reproduce. > Configuration site 1: > {code:java} > create disk-store --name=queue_disk_store --dir=ds2 > create gateway-sender -id="remote_site_2" --parallel="true" > --remote-distributed-system-id="1" -enable-persistence=true > --disk-store-name=queue_disk_store > create disk-store --name=data_disk_store --dir=ds1 > create region --name=example-region --type=PARTITION_PERSISTENT > --gateway-sender-id="remote_site_2" --disk-store=data_disk_store > --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false > #Configure the remote site 2 with the region and the gateway-receiver > #Run some traffic so that all buckets are created and data is replicated to > the other site > alter region --name=/example-region --gateway-sender-id="" > destroy region --name=/example-region > create region --name=example-region --type=PARTITION_PERSISTENT > --gateway-sender-id="remote_site_2" --disk-store=data_disk_store > --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false > #run traffic to see that some data is not replicated to the remote site 2 > {code} -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-10196) Multiple geode-for-redis DUnitTests fail to ignore expected exceptions on JDK 17
[ https://issues.apache.org/jira/browse/GEODE-10196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531773#comment-17531773 ] Geode Integration commented on GEODE-10196: --- Seen in [distributed-test-openjdk17 #7|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/distributed-test-openjdk17/builds/7] ... see [test results|http://files.apachegeode-ci.info/builds/apache-develop-main/1.15.0-build.1145/test-results/distributedTest/1651629968/] or download [artifacts|http://files.apachegeode-ci.info/builds/apache-develop-main/1.15.0-build.1145/test-artifacts/1651629968/distributedtestfiles-openjdk17-1.15.0-build.1145.tgz]. > Multiple geode-for-redis DUnitTests fail to ignore expected exceptions on JDK > 17 > > > Key: GEODE-10196 > URL: https://issues.apache.org/jira/browse/GEODE-10196 > Project: Geode > Issue Type: Improvement > Components: tests >Affects Versions: 1.15.0 >Reporter: Dale Emery >Assignee: Darrel Schneider >Priority: Major > Labels: Java17, pull-request-available > Fix For: 1.15.0 > > > The {{HashesAndCrashesDUnitTest.executeUntilSuccess()}} method (called by all > of the test in the class) expects exceptions with the message "Connection > reset by peer", logs them, and retries the operation. > > The source of the exception is {{{}SocketChannel.read(){}}}. On JDK 17, the > exception message is "Connection reset". This is not the expected message, > and so {{executeUntilSuccess()}} rethrows it instead of ignoring it, causing > the test to fail. > > Incidentally, the type of the exception on JDK 17 {{{}SocketException{}}}, > but on JDK 8 and 11 is {{{}IOException{}}}. This does not affect the test, > which inspects only the exception message, not the type. > > On JDK 17, the stack trace of the exception is: > {noformat} > io.lettuce.core.RedisException: java.net.SocketException: Connection reset > at io.lettuce.core.internal.Exceptions.bubble(Exceptions.java:83) > at io.lettuce.core.internal.Futures.awaitOrCancel(Futures.java:250) > at > io.lettuce.core.cluster.ClusterFutureSyncInvocationHandler.handleInvocation(ClusterFutureSyncInvocationHandler.java:130) > at > io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80) > at jdk.proxy3/jdk.proxy3.$Proxy53.set(Unknown Source) > at > org.apache.geode.redis.internal.commands.executor.hash.HashesAndCrashesDUnitTest.lambda$setPerformAndVerify$14(HashesAndCrashesDUnitTest.java:257) > at > org.apache.geode.redis.internal.commands.executor.hash.HashesAndCrashesDUnitTest.executeUntilSuccess(HashesAndCrashesDUnitTest.java:274) > at > org.apache.geode.redis.internal.commands.executor.hash.HashesAndCrashesDUnitTest.setPerformAndVerify(HashesAndCrashesDUnitTest.java:257) > at > org.apache.geode.redis.internal.commands.executor.hash.HashesAndCrashesDUnitTest.lambda$modifyDataWhileCrashingVMs$11(HashesAndCrashesDUnitTest.java:161) > at > java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) > at java.base/java.lang.Thread.run(Thread.java:833) > Caused by: java.net.SocketException: Connection reset > at > java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:394) > at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:426) > at 
io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:258) > at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132) > at > io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) > at > io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151) > at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) > at > io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) > at > io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) > at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) > at > io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) > at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) > at > io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) > ... 1 more > {noformat} > > On JDK 11, the stack trace of the exception is: > {noformat} > io.lettuce.core.RedisException: java.io.IOException: Connection reset by peer > at io.lettuce.core.internal.Exceptions.bubble(Exceptions.java:83) > at io.lettuce.core.int
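A hedged sketch of the kind of adjustment this ticket implies, namely accepting both the JDK 8/11 message ("Connection reset by peer") and the JDK 17 message ("Connection reset") in the retry helper, is shown below. The helper's shape is illustrative and is not the actual HashesAndCrashesDUnitTest code:

{code:java}
public final class RetryOnConnectionReset {
  // Illustrative retry helper: treats a connection-reset exception as expected and retries,
  // rethrowing anything else. Because "Connection reset by peer" contains "Connection reset",
  // a single substring check covers both the JDK 8/11 and the JDK 17 message.
  public static void executeUntilSuccess(Runnable operation) throws InterruptedException {
    while (true) {
      try {
        operation.run();
        return;
      } catch (RuntimeException e) {
        String message = String.valueOf(e.getMessage());
        if (!message.contains("Connection reset")) {
          throw e; // unexpected failure: surface it
        }
        Thread.sleep(50); // brief backoff before retrying (illustrative)
      }
    }
  }
}
{code}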
[jira] [Comment Edited] (GEODE-10215) WAN replication not working after re-creating the partitioned region
[ https://issues.apache.org/jira/browse/GEODE-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531771#comment-17531771 ] Dave Barnes edited comment on GEODE-10215 at 5/4/22 2:59 PM: - I have approved adding a warning to the docs, because it's better than nothing. But it feels to me like an incomplete remedy for a design flaw. I leave it to others to determine whether this truly 'resolves' the bug. was (Author: dbarnes97): I have approved adding a warning to the docs, because it's better than nothing. But it feels to me like an incomplete remedy for a design flaw. > WAN replication not working after re-creating the partitioned region > > > Key: GEODE-10215 > URL: https://issues.apache.org/jira/browse/GEODE-10215 > Project: Geode > Issue Type: Bug >Reporter: Jakov Varenina >Assignee: Jakov Varenina >Priority: Major > Labels: pull-request-available > > Steps to reproduce the issue: > Start multi-site with at least 3 servers on each site. If there are less than > three servers then issue will not reproduce. > Configuration site 1: > {code:java} > create disk-store --name=queue_disk_store --dir=ds2 > create gateway-sender -id="remote_site_2" --parallel="true" > --remote-distributed-system-id="1" -enable-persistence=true > --disk-store-name=queue_disk_store > create disk-store --name=data_disk_store --dir=ds1 > create region --name=example-region --type=PARTITION_PERSISTENT > --gateway-sender-id="remote_site_2" --disk-store=data_disk_store > --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false > #Configure the remote site 2 with the region and the gateway-receiver > #Run some traffic so that all buckets are created and data is replicated to > the other site > alter region --name=/example-region --gateway-sender-id="" > destroy region --name=/example-region > create region --name=example-region --type=PARTITION_PERSISTENT > --gateway-sender-id="remote_site_2" --disk-store=data_disk_store > --total-num-buckets=1103 --redundant-copies=1 --enable-synchronous-disk=false > #run traffic to see that some data is not replicated to the remote site 2 > {code} -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (GEODE-10277) Exception thrown when checking gatewaySender EventQueueSize, while restarting gateway sender with clean queue option
[ https://issues.apache.org/jira/browse/GEODE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated GEODE-10277: --- Labels: needsTriage pull-request-available (was: needsTriage) > Exception thrown when checking gatewaySender EventQueueSize, while restarting > gateway sender with clean queue option > > > Key: GEODE-10277 > URL: https://issues.apache.org/jira/browse/GEODE-10277 > Project: Geode > Issue Type: Bug > Components: statistics >Reporter: Mario Ivanac >Assignee: Mario Ivanac >Priority: Major > Labels: needsTriage, pull-request-available > > If we check EventQueueSize on a server whose parallel gateway > sender queue is full while the gateway sender is restarted with the --cleanqueue option, a > NullPointerException occurs in the JMX client. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Assigned] (GEODE-9921) Rename .NET client to .NET Framework
[ https://issues.apache.org/jira/browse/GEODE-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Max Hufnagel reassigned GEODE-9921: --- Assignee: Max Hufnagel (was: Dave Barnes) > Rename .NET client to .NET Framework > > > Key: GEODE-9921 > URL: https://issues.apache.org/jira/browse/GEODE-9921 > Project: Geode > Issue Type: Improvement > Components: docs, native client >Affects Versions: 1.14.2 >Reporter: Dave Barnes >Assignee: Max Hufnagel >Priority: Major > Labels: pull-request-available > > The .NET native client docs need to be renamed to .NET Framework to clarify > that it is not .NET Core -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-9921) Rename .NET client to .NET Framework
[ https://issues.apache.org/jira/browse/GEODE-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531874#comment-17531874 ] Max Hufnagel commented on GEODE-9921: - ".NET" changed to ".NET Framework" in the following topics: * [https://geode.apache.org/docs/geode-native/dotnet/114/about-client-users-guide.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/client-cache-ref.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/configuring/config-client-cache.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/configuring/sysprops.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/continuous-queries.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/function-execution.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/getting-started/app-dev-walkthrough-dotnet.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/getting-started/getting-started-nc-client.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/getting-started/put-get-example.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/preserving-data/config-durable-reconnect.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/regions/regions.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/regions/registering-interest-for-entries.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/remote-queries.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/security/authentication.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/serialization/data-serialization.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/serialization/dotnet-serialization/dotnet-pdx-autoserializer.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/serialization/dotnet-serialization/dotnet-pdx-serialization.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/serialization/dotnet-serialization/pdx-serializable-examples.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/serialization/dotnet-serialization/serialize-using-ipdxserializable.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/transactions.html] === * [https://geode.apache.org/docs/geode-native/dotnet/112/about-client-users-guide.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/client-cache-ref.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/configuring/config-client-cache.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/configuring/sysprops.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/continuous-queries.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/function-execution.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/getting-started/app-dev-walkthrough-dotnet.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/getting-started/getting-started-nc-client.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/getting-started/put-get-example.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/preserving-data/config-durable-reconnect.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/regions/regions.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/regions/registering-interest-for-entries.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/remote-queries.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/security/authentication.html] * 
[https://geode.apache.org/docs/geode-native/dotnet/112/serialization/data-serialization.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/serialization/dotnet-serialization/dotnet-pdx-autoserializer.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/serialization/dotnet-serialization/dotnet-pdx-serialization.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/serialization/dotnet-serialization/pdx-serializable-examples.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/serialization/dotnet-serialization/serialize-using-ipdxserializable.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/transactions.html] > Rename .NET client to .NET Framework > > > Key: GEODE-9921 > URL: https://issues.apache.org/jira/browse/GEODE-9921 > Project: Geode > Issue Type: Improvement > Components: docs, native client >Affects Versions: 1.14.2 >Reporter: Dave Barnes >Assignee: Max Hufnagel >Priority: Major > Labels: pull-request-available > > The .NET native client docs need to be renamed to .NET Framework to clarify > that it is not .NET Core -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Comment Edited] (GEODE-9921) Rename .NET client to .NET Framework
[ https://issues.apache.org/jira/browse/GEODE-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531874#comment-17531874 ] Max Hufnagel edited comment on GEODE-9921 at 5/4/22 5:54 PM: - ".NET" changed to ".NET Framework" in the following topics (where appropriate): * [https://geode.apache.org/docs/geode-native/dotnet/114/about-client-users-guide.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/client-cache-ref.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/configuring/config-client-cache.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/configuring/sysprops.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/continuous-queries.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/function-execution.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/getting-started/app-dev-walkthrough-dotnet.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/getting-started/getting-started-nc-client.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/getting-started/put-get-example.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/preserving-data/config-durable-reconnect.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/regions/regions.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/regions/registering-interest-for-entries.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/remote-queries.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/security/authentication.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/serialization/data-serialization.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/serialization/dotnet-serialization/dotnet-pdx-autoserializer.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/serialization/dotnet-serialization/dotnet-pdx-serialization.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/serialization/dotnet-serialization/pdx-serializable-examples.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/serialization/dotnet-serialization/serialize-using-ipdxserializable.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/transactions.html] === * [https://geode.apache.org/docs/geode-native/dotnet/112/about-client-users-guide.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/client-cache-ref.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/configuring/config-client-cache.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/configuring/sysprops.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/continuous-queries.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/function-execution.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/getting-started/app-dev-walkthrough-dotnet.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/getting-started/getting-started-nc-client.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/getting-started/put-get-example.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/preserving-data/config-durable-reconnect.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/regions/regions.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/regions/registering-interest-for-entries.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/remote-queries.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/security/authentication.html] * 
[https://geode.apache.org/docs/geode-native/dotnet/112/serialization/data-serialization.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/serialization/dotnet-serialization/dotnet-pdx-autoserializer.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/serialization/dotnet-serialization/dotnet-pdx-serialization.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/serialization/dotnet-serialization/pdx-serializable-examples.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/serialization/dotnet-serialization/serialize-using-ipdxserializable.html] * [https://geode.apache.org/docs/geode-native/dotnet/112/transactions.html] was (Author: JIRAUSER287269): ".NET" changed to ".NET Framework" in the following topics: * [https://geode.apache.org/docs/geode-native/dotnet/114/about-client-users-guide.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/client-cache-ref.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/configuring/config-client-cache.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/configuring/sysprops.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/continuous-queries.html] * [https://geode.apache.org/docs/geode-native/dotnet/114/function-execution.html] * [https://geode.apache.org/docs/geode-n
[jira] [Commented] (GEODE-10196) Multiple geode-for-redis DUnitTests fail to ignore expected exceptions on JDK 17
[ https://issues.apache.org/jira/browse/GEODE-10196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531881#comment-17531881 ] Darrel Schneider commented on GEODE-10196: -- MessageDispatcher.java is not an issue because it handles both "Connection reset" and "Connection reset by peer". > Multiple geode-for-redis DUnitTests fail to ignore expected exceptions on JDK > 17 > > > Key: GEODE-10196 > URL: https://issues.apache.org/jira/browse/GEODE-10196 > Project: Geode > Issue Type: Improvement > Components: tests >Affects Versions: 1.15.0 >Reporter: Dale Emery >Assignee: Darrel Schneider >Priority: Major > Labels: Java17, pull-request-available > Fix For: 1.15.0 > > > The {{HashesAndCrashesDUnitTest.executeUntilSuccess()}} method (called by all > of the test in the class) expects exceptions with the message "Connection > reset by peer", logs them, and retries the operation. > > The source of the exception is {{{}SocketChannel.read(){}}}. On JDK 17, the > exception message is "Connection reset". This is not the expected message, > and so {{executeUntilSuccess()}} rethrows it instead of ignoring it, causing > the test to fail. > > Incidentally, the type of the exception on JDK 17 {{{}SocketException{}}}, > but on JDK 8 and 11 is {{{}IOException{}}}. This does not affect the test, > which inspects only the exception message, not the type. > > On JDK 17, the stack trace of the exception is: > {noformat} > io.lettuce.core.RedisException: java.net.SocketException: Connection reset > at io.lettuce.core.internal.Exceptions.bubble(Exceptions.java:83) > at io.lettuce.core.internal.Futures.awaitOrCancel(Futures.java:250) > at > io.lettuce.core.cluster.ClusterFutureSyncInvocationHandler.handleInvocation(ClusterFutureSyncInvocationHandler.java:130) > at > io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80) > at jdk.proxy3/jdk.proxy3.$Proxy53.set(Unknown Source) > at > org.apache.geode.redis.internal.commands.executor.hash.HashesAndCrashesDUnitTest.lambda$setPerformAndVerify$14(HashesAndCrashesDUnitTest.java:257) > at > org.apache.geode.redis.internal.commands.executor.hash.HashesAndCrashesDUnitTest.executeUntilSuccess(HashesAndCrashesDUnitTest.java:274) > at > org.apache.geode.redis.internal.commands.executor.hash.HashesAndCrashesDUnitTest.setPerformAndVerify(HashesAndCrashesDUnitTest.java:257) > at > org.apache.geode.redis.internal.commands.executor.hash.HashesAndCrashesDUnitTest.lambda$modifyDataWhileCrashingVMs$11(HashesAndCrashesDUnitTest.java:161) > at > java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) > at java.base/java.lang.Thread.run(Thread.java:833) > Caused by: java.net.SocketException: Connection reset > at > java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:394) > at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:426) > at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:258) > at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132) > at > io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) > at > io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151) > at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) > 
at > io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) > at > io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) > at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) > at > io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) > at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) > at > io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) > ... 1 more > {noformat} > > On JDK 11, the stack trace of the exception is: > {noformat} > io.lettuce.core.RedisException: java.io.IOException: Connection reset by peer > at io.lettuce.core.internal.Exceptions.bubble(Exceptions.java:83) > at io.lettuce.core.internal.Futures.awaitOrCancel(Futures.java:250) > at > io.lettuce.core.cluster.ClusterFutureSyncInvocationHandler.handleInvocation(ClusterFutureSyncInvocationHandler.java:130) > at > io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80) > at com.sun.proxy.$Proxy52.set(Unknown Source) > at > org.apache.geode.redis.internal.commands.exe
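For illustration, here is a minimal sketch of the kind of retry helper the ticket calls for, treating both the JDK 8/11 message ("Connection reset by peer") and the JDK 17 message ("Connection reset") as retriable by matching on their common prefix. The class and method names are assumptions for the sketch, not the actual HashesAndCrashesDUnitTest code; matching the prefix also keeps the check independent of whether the cause is an IOException or a SocketException.
{code:java}
import java.util.function.Supplier;

// Hypothetical retry helper; mirrors the executeUntilSuccess() idea from the
// ticket but is not the real test code.
public final class RetryOnConnectionReset {

  // SocketChannel.read() reports "Connection reset by peer" on JDK 8/11 and
  // "Connection reset" on JDK 17, so match the shared prefix anywhere in the
  // cause chain.
  private static boolean isConnectionReset(Throwable t) {
    for (Throwable cause = t; cause != null; cause = cause.getCause()) {
      String message = cause.getMessage();
      if (message != null && message.startsWith("Connection reset")) {
        return true;
      }
    }
    return false;
  }

  // Retries the operation until it succeeds; anything other than a connection
  // reset is rethrown immediately.
  public static <T> T executeUntilSuccess(Supplier<T> operation) {
    while (true) {
      try {
        return operation.get();
      } catch (RuntimeException e) {
        if (!isConnectionReset(e)) {
          throw e;
        }
        // Connection reset: log if desired and retry the operation.
      }
    }
  }
}
{code}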
[jira] [Commented] (GEODE-9484) Data inconsistency in replicated region with 3 or more servers, and one server is down
[ https://issues.apache.org/jira/browse/GEODE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531895#comment-17531895 ] Anilkumar Gingade commented on GEODE-9484: -- [~mivanac] The PR/Fix for this issue is showing up NPE in internal tests. Here is the stack trace: java.lang.NullPointerException at org.apache.geode.internal.tcp.TCPConduit.getFirstScanForConnection(TCPConduit.java:958) at org.apache.geode.distributed.internal.direct.DirectChannel.getConnections(DirectChannel.java:477) at org.apache.geode.distributed.internal.direct.DirectChannel.sendToMany(DirectChannel.java:277) at org.apache.geode.distributed.internal.direct.DirectChannel.send(DirectChannel.java:543) at org.apache.geode.distributed.internal.DistributionImpl.directChannelSend(DistributionImpl.java:348) at org.apache.geode.distributed.internal.DistributionImpl.send(DistributionImpl.java:293) at org.apache.geode.distributed.internal.ClusterDistributionManager.sendViaMembershipManager(ClusterDistributionManager.java:2067) at org.apache.geode.distributed.internal.ClusterDistributionManager.sendOutgoing(ClusterDistributionManager.java:1994) at org.apache.geode.distributed.internal.ClusterDistributionManager.sendMessage(ClusterDistributionManager.java:2031) at org.apache.geode.distributed.internal.ClusterDistributionManager.putOutgoing(ClusterDistributionManager.java:1088) at org.apache.geode.internal.cache.CreateRegionProcessor.initializeRegion(CreateRegionProcessor.java:115) at org.apache.geode.internal.cache.DistributedRegion.getInitialImageAndRecovery(DistributedRegion.java:1176) at org.apache.geode.internal.cache.DistributedRegion.initialize(DistributedRegion.java:1107) at org.apache.geode.internal.cache.HARegion.initialize(HARegion.java:323) at org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3103) at org.apache.geode.internal.cache.HARegion.getInstance(HARegion.java:246) at org.apache.geode.internal.cache.ha.HARegionQueue.(HARegionQueue.java:365) at org.apache.geode.internal.cache.ha.HARegionQueue$BlockingHARegionQueue.(HARegionQueue.java:2233) at org.apache.geode.internal.cache.ha.HARegionQueue$DurableHARegionQueue.(HARegionQueue.java:2478) at org.apache.geode.internal.cache.ha.HARegionQueue.getHARegionQueueInstance(HARegionQueue.java:2015) at org.apache.geode.internal.cache.tier.sockets.MessageDispatcher.getMessageQueue(MessageDispatcher.java:166) at org.apache.geode.internal.cache.tier.sockets.MessageDispatcher.(MessageDispatcher.java:146) at org.apache.geode.internal.cache.tier.sockets.CacheClientProxy.createMessageDispatcher(CacheClientProxy.java:1685) at org.apache.geode.internal.cache.tier.sockets.CacheClientProxy.initializeMessageDispatcher(CacheClientProxy.java:1677) at org.apache.geode.internal.cache.tier.sockets.CacheClientNotifier.initializeProxy(CacheClientNotifier.java:502) at org.apache.geode.internal.cache.tier.sockets.CacheClientNotifier.registerClientInternal(CacheClientNotifier.java:406) at org.apache.geode.internal.cache.tier.sockets.CacheClientNotifier.registerClient(CacheClientNotifier.java:221) at org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$ClientQueueInitializerTask.run(AcceptorImpl.java:1896) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeClientQueueInitializerThreadPool$1(AcceptorImpl.java:678) at 
org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:120) at java.lang.Thread.run(Thread.java:750) > Data inconsistency in replicated region with 3 or more servers, and one > server is down > --- > > Key: GEODE-9484 > URL: https://issues.apache.org/jira/browse/GEODE-9484 > Project: Geode > Issue Type: Improvement > Components: client/server, regions >Affects Versions: 1.13.0 >Reporter: Mario Ivanac >Assignee: Mario Ivanac >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > We have configured replicated region with 3 or more servers, and client is > configured with read timeout set to value same or smaller than member timeout. > In case while client is putting data in region, one of replicated servers is > shutdown, it is observed that we have data inconsistency. >
[jira] [Commented] (GEODE-9484) Data inconsistency in replicated region with 3 or more servers, and one server is down
[ https://issues.apache.org/jira/browse/GEODE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531896#comment-17531896 ] Anilkumar Gingade commented on GEODE-9484: -- [~mivanac] Can you please revert these changes. Re-open the ticket, address the NPE with new additional tests. > Data inconsistency in replicated region with 3 or more servers, and one > server is down > --- > > Key: GEODE-9484 > URL: https://issues.apache.org/jira/browse/GEODE-9484 > Project: Geode > Issue Type: Improvement > Components: client/server, regions >Affects Versions: 1.13.0 >Reporter: Mario Ivanac >Assignee: Mario Ivanac >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > We have configured replicated region with 3 or more servers, and client is > configured with read timeout set to value same or smaller than member timeout. > In case while client is putting data in region, one of replicated servers is > shutdown, it is observed that we have data inconsistency. > > We see that data part of data is written in server connected with client, but > in remaining replicated servers it is missing. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Comment Edited] (GEODE-9484) Data inconsistency in replicated region with 3 or more servers, and one server is down
[ https://issues.apache.org/jira/browse/GEODE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531896#comment-17531896 ] Anilkumar Gingade edited comment on GEODE-9484 at 5/4/22 6:56 PM: -- [~mivanac] Can you please revert these changes. Re-open the ticket, address the NPE with new additional tests. was (Author: agingade): [~mivanac] Can you please revert these changes. Re-open the ticket, address the NPE with new additional tests. > Data inconsistency in replicated region with 3 or more servers, and one > server is down > --- > > Key: GEODE-9484 > URL: https://issues.apache.org/jira/browse/GEODE-9484 > Project: Geode > Issue Type: Improvement > Components: client/server, regions >Affects Versions: 1.13.0 >Reporter: Mario Ivanac >Assignee: Mario Ivanac >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > We have configured replicated region with 3 or more servers, and client is > configured with read timeout set to value same or smaller than member timeout. > In case while client is putting data in region, one of replicated servers is > shutdown, it is observed that we have data inconsistency. > > We see that data part of data is written in server connected with client, but > in remaining replicated servers it is missing. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Comment Edited] (GEODE-9484) Data inconsistency in replicated region with 3 or more servers, and one server is down
[ https://issues.apache.org/jira/browse/GEODE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531896#comment-17531896 ] Anilkumar Gingade edited comment on GEODE-9484 at 5/4/22 6:58 PM: -- [~mivanac] Can you please revert these changes, re-open the ticket, and address the NPE with new additional tests. It would be very helpful if you could revert this sooner, as the NPE could be masking other issues that may not get surfaced. Otherwise, we may go ahead and revert these changes. was (Author: agingade): [~mivanac] Can you please revert these changes. Re-open the ticket, address the NPE with new additional tests. > Data inconsistency in replicated region with 3 or more servers, and one > server is down > --- > > Key: GEODE-9484 > URL: https://issues.apache.org/jira/browse/GEODE-9484 > Project: Geode > Issue Type: Improvement > Components: client/server, regions >Affects Versions: 1.13.0 >Reporter: Mario Ivanac >Assignee: Mario Ivanac >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > We have configured replicated region with 3 or more servers, and client is > configured with read timeout set to value same or smaller than member timeout. > In case while client is putting data in region, one of replicated servers is > shutdown, it is observed that we have data inconsistency. > > We see that data part of data is written in server connected with client, but > in remaining replicated servers it is missing. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-9484) Data inconsistency in replicated region with 3 or more servers, and one server is down
[ https://issues.apache.org/jira/browse/GEODE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531898#comment-17531898 ] ASF subversion and git services commented on GEODE-9484: Commit 758ef27045019cbe5654aed42b52898cc41ceaa8 in geode's branch refs/heads/revert-7381-newfeature1/GEODE-9484 from Mario Ivanac [ https://gitbox.apache.org/repos/asf?p=geode.git;h=758ef27045 ] Revert "GEODE-9484: Improve sending message to multy destinations (#7381)" This reverts commit 62cd12c7f0bbb3d092011555e714e57ce041791a. > Data inconsistency in replicated region with 3 or more servers, and one > server is down > --- > > Key: GEODE-9484 > URL: https://issues.apache.org/jira/browse/GEODE-9484 > Project: Geode > Issue Type: Improvement > Components: client/server, regions >Affects Versions: 1.13.0 >Reporter: Mario Ivanac >Assignee: Mario Ivanac >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > We have configured replicated region with 3 or more servers, and client is > configured with read timeout set to value same or smaller than member timeout. > In case while client is putting data in region, one of replicated servers is > shutdown, it is observed that we have data inconsistency. > > We see that data part of data is written in server connected with client, but > in remaining replicated servers it is missing. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Assigned] (GEODE-10207) check log levels for gfsh commands
[ https://issues.apache.org/jira/browse/GEODE-10207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Max Hufnagel reassigned GEODE-10207: Assignee: Max Hufnagel > check log levels for gfsh commands > --- > > Key: GEODE-10207 > URL: https://issues.apache.org/jira/browse/GEODE-10207 > Project: Geode > Issue Type: Sub-task > Components: docs >Affects Versions: 1.14.4 >Reporter: Dave Barnes >Assignee: Max Hufnagel >Priority: Major > > Community member Tod Morrison suggests checking log levels for gfsh commands > `alter runtime` and `change loglevel`. > The levels don’t look right: > The new log level. This option is required and you must specify a value. > Valid values are: ALL, TRACE, DEBUG, INFO, WARN, ERROR, FATAL, OFF. > (Default is INFO) > See > https://geode.apache.org/docs/guide/114/managing/logging/setting_up_logging.html, > which shows valid choices of `severe`, `error`, `warning`, `info`, `config`, > and `fine`. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-10275) bump spring to recommended version
[ https://issues.apache.org/jira/browse/GEODE-10275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531912#comment-17531912 ] ASF subversion and git services commented on GEODE-10275: - Commit a81c884b85961878a631ebedebb0ab98dcbf5875 in geode's branch refs/heads/develop from Owen Nichols [ https://gitbox.apache.org/repos/asf?p=geode.git;h=a81c884b85 ] GEODE-10275: Bump spring from 5.3.18 to 5.3.19 (#7647) Geode endeavors to update to the latest version of 3rd-party dependencies on develop wherever possible. > bump spring to recommended version > -- > > Key: GEODE-10275 > URL: https://issues.apache.org/jira/browse/GEODE-10275 > Project: Geode > Issue Type: Task >Reporter: Owen Nichols >Priority: Major > Labels: pull-request-available > Fix For: 1.12.10, 1.14.5 > > > latest (5.3.19 or 5.2.21) is recommended -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (GEODE-10275) bump spring to recommended version
[ https://issues.apache.org/jira/browse/GEODE-10275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen Nichols updated GEODE-10275: - Fix Version/s: 1.15.0 > bump spring to recommended version > -- > > Key: GEODE-10275 > URL: https://issues.apache.org/jira/browse/GEODE-10275 > Project: Geode > Issue Type: Task >Reporter: Owen Nichols >Priority: Major > Labels: pull-request-available > Fix For: 1.12.10, 1.14.5, 1.15.0 > > > latest (5.3.19 or 5.2.21) is recommended -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-10046) bump dependencies in 1.16
[ https://issues.apache.org/jira/browse/GEODE-10046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531914#comment-17531914 ] ASF subversion and git services commented on GEODE-10046: - Commit b27d6a4e4794ba446e4757d0dc06e8d5bb4e878e in geode's branch refs/heads/develop from Owen Nichols [ https://gitbox.apache.org/repos/asf?p=geode.git;h=b27d6a4e47 ] GEODE-10046: Bump 3rd-party dependency versions (#7650) Geode endeavors to update to the latest version of 3rd-party dependencies on develop wherever possible. Doing so increases the shelf life of releases and increases security and reliability. Doing so regularly makes the occasional hiccups this can cause easier to pinpoint and address. Dependency bumps in this batch: * Bump classgraph from 4.8.145 to 4.8.146 * Bump micrometer from 1.8.4 to 1.8.5 * Bump netty-handler from 4.1.75 to 4.1.76 * Bump spring-boot-starter-web from 2.6.6 to 2.6.7 * Bump spring-hateoas from 1.4.1 to 1.4.2 * Bump spring-ldap-core from 2.3.6 to 2.3.7 * Bump spring-security from 5.6.2 to 5.6.3 > bump dependencies in 1.16 > - > > Key: GEODE-10046 > URL: https://issues.apache.org/jira/browse/GEODE-10046 > Project: Geode > Issue Type: Improvement > Components: build >Reporter: Owen Nichols >Assignee: Owen Nichols >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > Until support/1.16 is cut, periodically check for and switch to the latest > versions of 3rd-party dependencies. This will extend the shelf life of the > eventual Geode 1.16 release and hopefully reduce bugs and CVE exposure, or at > least give a smaller delta if a CVE is later found that we need to patch. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (GEODE-10275) bump spring to recommended version
[ https://issues.apache.org/jira/browse/GEODE-10275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen Nichols resolved GEODE-10275. -- Resolution: Fixed > bump spring to recommended version > -- > > Key: GEODE-10275 > URL: https://issues.apache.org/jira/browse/GEODE-10275 > Project: Geode > Issue Type: Task >Reporter: Owen Nichols >Priority: Major > Labels: pull-request-available > Fix For: 1.12.10, 1.14.5, 1.15.0 > > > latest (5.3.19 or 5.2.21) is recommended -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (GEODE-10278) Remove geode-for-redis module
Dan Smith created GEODE-10278: - Summary: Remove geode-for-redis module Key: GEODE-10278 URL: https://issues.apache.org/jira/browse/GEODE-10278 Project: Geode Issue Type: Improvement Components: redis Reporter: Dan Smith There is consensus to remove the geode-for-redis module from geode, based on this discussion thread - [https://lists.apache.org/thread/7m23h9r0tf536g414bwjsplqh1qv2ct0] This module was still experimental in Geode 1.14, so it can be removed without breaking our API guarantees. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (GEODE-10278) Remove geode-for-redis module
[ https://issues.apache.org/jira/browse/GEODE-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated GEODE-10278: --- Labels: pull-request-available (was: ) > Remove geode-for-redis module > - > > Key: GEODE-10278 > URL: https://issues.apache.org/jira/browse/GEODE-10278 > Project: Geode > Issue Type: Improvement > Components: redis >Reporter: Dan Smith >Priority: Major > Labels: pull-request-available > > There is consensus to remove the geode-for-redis module from geode, based on > this discussion thread - > [https://lists.apache.org/thread/7m23h9r0tf536g414bwjsplqh1qv2ct0] > This module was still experimental in Geode 1.14, so it can be removed > without breaking our API guarantees. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-10278) Remove geode-for-redis module
[ https://issues.apache.org/jira/browse/GEODE-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531953#comment-17531953 ] ASF GitHub Bot commented on GEODE-10278: upthewaterspout opened a new pull request, #115: URL: https://github.com/apache/geode-examples/pull/115 This module is being removed from the geode repository, so we need to remove the corresponding example as well. > Remove geode-for-redis module > - > > Key: GEODE-10278 > URL: https://issues.apache.org/jira/browse/GEODE-10278 > Project: Geode > Issue Type: Improvement > Components: redis >Reporter: Dan Smith >Priority: Major > Labels: pull-request-available > > There is consensus to remove the geode-for-redis module from geode, based on > this discussion thread - > [https://lists.apache.org/thread/7m23h9r0tf536g414bwjsplqh1qv2ct0] > This module was still experimental in Geode 1.14, so it can be removed > without breaking our API guarantees. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (GEODE-9921) Rename .NET client to .NET Framework
[ https://issues.apache.org/jira/browse/GEODE-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Max Hufnagel resolved GEODE-9921. - Fix Version/s: 1.14.4 Resolution: Resolved > Rename .NET client to .NET Framework > > > Key: GEODE-9921 > URL: https://issues.apache.org/jira/browse/GEODE-9921 > Project: Geode > Issue Type: Improvement > Components: docs, native client >Affects Versions: 1.14.2 >Reporter: Dave Barnes >Assignee: Max Hufnagel >Priority: Major > Labels: pull-request-available > Fix For: 1.14.4 > > > The .NET native client docs need to be renamed to .NET Framework to clarify > that it is not .NET Core -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (GEODE-10264) Geode user guide: remove the "Connect to the server from your application" section
[ https://issues.apache.org/jira/browse/GEODE-10264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Max Hufnagel resolved GEODE-10264. -- Fix Version/s: 1.14.4 Resolution: Resolved > Geode user guide: remove the "Connect to the server from your application" > section > -- > > Key: GEODE-10264 > URL: https://issues.apache.org/jira/browse/GEODE-10264 > Project: Geode > Issue Type: Bug > Components: docs >Affects Versions: 1.14.4 >Reporter: Dave Barnes >Assignee: Max Hufnagel >Priority: Major > Labels: pull-request-available > Fix For: 1.14.4 > > > Community member John Martin reported: > We need to remove the "Connect to the server from your application" section > from this page please: > https://geode.apache.org/docs/guide/114/getting_started/intro_to_clients.html > it is not accurate, and was a section I thought I had deleted before > submitting, but apparently I had not. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-10278) Remove geode-for-redis module
[ https://issues.apache.org/jira/browse/GEODE-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531965#comment-17531965 ] ASF GitHub Bot commented on GEODE-10278: upthewaterspout opened a new pull request, #167: URL: https://github.com/apache/geode-benchmarks/pull/167 This module is being removed from the geode repository, so we need to remove the corresponding benchmarks as well. > Remove geode-for-redis module > - > > Key: GEODE-10278 > URL: https://issues.apache.org/jira/browse/GEODE-10278 > Project: Geode > Issue Type: Improvement > Components: redis >Reporter: Dan Smith >Priority: Major > Labels: pull-request-available > > There is consensus to remove the geode-for-redis module from geode, based on > this discussion thread - > [https://lists.apache.org/thread/7m23h9r0tf536g414bwjsplqh1qv2ct0] > This module was still experimental in Geode 1.14, so it can be removed > without breaking our API guarantees. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-9390) DistributedSystem nodes is counted twice on each server member
[ https://issues.apache.org/jira/browse/GEODE-9390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531980#comment-17531980 ] ASF subversion and git services commented on GEODE-9390: Commit 8dabaab77e6c624707716a1f24fdf072f7dd9655 in geode's branch refs/heads/develop from Jinmei Liao [ https://gitbox.apache.org/repos/asf?p=geode.git;h=8dabaab77e ] GEODE-9390: Guarding membership addition code paths to omit membership duplicates (#7639) Co-authored-by: Matthew Reddington > DistributedSystem nodes is counted twice on each server member > -- > > Key: GEODE-9390 > URL: https://issues.apache.org/jira/browse/GEODE-9390 > Project: Geode > Issue Type: Bug > Components: membership >Reporter: Barrett Oglesby >Priority: Major > Labels: pull-request-available > > Once in ClusterDistributionManager.startThreads: > {noformat} > [warn 2021/06/20 16:20:16.152 HST server-1 tid=0x1] > ClusterDistributionManager.handleManagerStartup > id=192.168.1.8(server-1:58386):41001; kind=10 > [warn 2021/06/20 16:20:16.153 HST server-1 tid=0x1] > DistributionStats.incNodes nodes=1 > java.lang.Exception > at > org.apache.geode.distributed.internal.DistributionStats.incNodes(DistributionStats.java:1362) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.handleManagerStartup(ClusterDistributionManager.java:1809) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.addNewMember(ClusterDistributionManager.java:1062) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.startThreads(ClusterDistributionManager.java:691) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.(ClusterDistributionManager.java:504) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.create(ClusterDistributionManager.java:326) > at > org.apache.geode.distributed.internal.InternalDistributedSystem.initialize(InternalDistributedSystem.java:780) > {noformat} > And once in ClusterDistributionManager.create: > {noformat} > [warn 2021/06/20 16:20:16.155 HST server-1 tid=0x1] > ClusterDistributionManager.handleManagerStartup > id=192.168.1.8(server-1:58386):41001; kind=10 > [warn 2021/06/20 16:20:16.156 HST server-1 tid=0x1] > DistributionStats.incNodes nodes=2 > java.lang.Exception > at > org.apache.geode.distributed.internal.DistributionStats.incNodes(DistributionStats.java:1362) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.handleManagerStartup(ClusterDistributionManager.java:1809) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.addNewMember(ClusterDistributionManager.java:1062) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.create(ClusterDistributionManager.java:354) > at > org.apache.geode.distributed.internal.InternalDistributedSystem.initialize(InternalDistributedSystem.java:780) > {noformat} -- This message was sent by Atlassian Jira (v8.20.7#820007)
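The commit above guards the membership-addition code paths so duplicates are omitted. As a rough, hypothetical illustration of that pattern (not Geode's actual change), the idea is to remember which members have already been counted, so reaching the startup handler from more than one code path increments the nodes stat only once:
{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for the DistributionStats/ClusterDistributionManager
// accounting; illustrates de-duplicating the "new member" path only.
final class MembershipStats {
  private final AtomicInteger nodes = new AtomicInteger();
  // Members whose startup has already been counted.
  private final Set<String> countedMembers = ConcurrentHashMap.newKeySet();

  void handleManagerStartup(String memberId) {
    // add() returns false when the member was already counted, so a second
    // call from another startup path leaves the stat unchanged.
    if (countedMembers.add(memberId)) {
      nodes.incrementAndGet();
    }
  }

  void handleManagerDeparture(String memberId) {
    if (countedMembers.remove(memberId)) {
      nodes.decrementAndGet();
    }
  }

  int getNodes() {
    return nodes.get();
  }
}
{code}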
[jira] [Updated] (GEODE-9390) DistributedSystem nodes is counted twice on each server member
[ https://issues.apache.org/jira/browse/GEODE-9390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinmei Liao updated GEODE-9390: --- Fix Version/s: 1.15.0 > DistributedSystem nodes is counted twice on each server member > -- > > Key: GEODE-9390 > URL: https://issues.apache.org/jira/browse/GEODE-9390 > Project: Geode > Issue Type: Bug > Components: membership >Reporter: Barrett Oglesby >Assignee: Matthew Reddington >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > Once in ClusterDistributionManager.startThreads: > {noformat} > [warn 2021/06/20 16:20:16.152 HST server-1 tid=0x1] > ClusterDistributionManager.handleManagerStartup > id=192.168.1.8(server-1:58386):41001; kind=10 > [warn 2021/06/20 16:20:16.153 HST server-1 tid=0x1] > DistributionStats.incNodes nodes=1 > java.lang.Exception > at > org.apache.geode.distributed.internal.DistributionStats.incNodes(DistributionStats.java:1362) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.handleManagerStartup(ClusterDistributionManager.java:1809) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.addNewMember(ClusterDistributionManager.java:1062) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.startThreads(ClusterDistributionManager.java:691) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.(ClusterDistributionManager.java:504) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.create(ClusterDistributionManager.java:326) > at > org.apache.geode.distributed.internal.InternalDistributedSystem.initialize(InternalDistributedSystem.java:780) > {noformat} > And once in ClusterDistributionManager.create: > {noformat} > [warn 2021/06/20 16:20:16.155 HST server-1 tid=0x1] > ClusterDistributionManager.handleManagerStartup > id=192.168.1.8(server-1:58386):41001; kind=10 > [warn 2021/06/20 16:20:16.156 HST server-1 tid=0x1] > DistributionStats.incNodes nodes=2 > java.lang.Exception > at > org.apache.geode.distributed.internal.DistributionStats.incNodes(DistributionStats.java:1362) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.handleManagerStartup(ClusterDistributionManager.java:1809) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.addNewMember(ClusterDistributionManager.java:1062) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.create(ClusterDistributionManager.java:354) > at > org.apache.geode.distributed.internal.InternalDistributedSystem.initialize(InternalDistributedSystem.java:780) > {noformat} -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (GEODE-9390) DistributedSystem nodes is counted twice on each server member
[ https://issues.apache.org/jira/browse/GEODE-9390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinmei Liao resolved GEODE-9390. Assignee: Matthew Reddington Resolution: Fixed > DistributedSystem nodes is counted twice on each server member > -- > > Key: GEODE-9390 > URL: https://issues.apache.org/jira/browse/GEODE-9390 > Project: Geode > Issue Type: Bug > Components: membership >Reporter: Barrett Oglesby >Assignee: Matthew Reddington >Priority: Major > Labels: pull-request-available > > Once in ClusterDistributionManager.startThreads: > {noformat} > [warn 2021/06/20 16:20:16.152 HST server-1 tid=0x1] > ClusterDistributionManager.handleManagerStartup > id=192.168.1.8(server-1:58386):41001; kind=10 > [warn 2021/06/20 16:20:16.153 HST server-1 tid=0x1] > DistributionStats.incNodes nodes=1 > java.lang.Exception > at > org.apache.geode.distributed.internal.DistributionStats.incNodes(DistributionStats.java:1362) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.handleManagerStartup(ClusterDistributionManager.java:1809) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.addNewMember(ClusterDistributionManager.java:1062) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.startThreads(ClusterDistributionManager.java:691) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.(ClusterDistributionManager.java:504) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.create(ClusterDistributionManager.java:326) > at > org.apache.geode.distributed.internal.InternalDistributedSystem.initialize(InternalDistributedSystem.java:780) > {noformat} > And once in ClusterDistributionManager.create: > {noformat} > [warn 2021/06/20 16:20:16.155 HST server-1 tid=0x1] > ClusterDistributionManager.handleManagerStartup > id=192.168.1.8(server-1:58386):41001; kind=10 > [warn 2021/06/20 16:20:16.156 HST server-1 tid=0x1] > DistributionStats.incNodes nodes=2 > java.lang.Exception > at > org.apache.geode.distributed.internal.DistributionStats.incNodes(DistributionStats.java:1362) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.handleManagerStartup(ClusterDistributionManager.java:1809) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.addNewMember(ClusterDistributionManager.java:1062) > at > org.apache.geode.distributed.internal.ClusterDistributionManager.create(ClusterDistributionManager.java:354) > at > org.apache.geode.distributed.internal.InternalDistributedSystem.initialize(InternalDistributedSystem.java:780) > {noformat} -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Assigned] (GEODE-10279) Need to lock RVV and flush before backup
[ https://issues.apache.org/jira/browse/GEODE-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaojian Zhou reassigned GEODE-10279: - Assignee: Xiaojian Zhou > Need to lock RVV and flush before backup > > > Key: GEODE-10279 > URL: https://issues.apache.org/jira/browse/GEODE-10279 > Project: Geode > Issue Type: Bug >Reporter: Xiaojian Zhou >Assignee: Xiaojian Zhou >Priority: Major > Labels: needsTriage > > When using async disk writer, in memory RVV has contained all the operations > in async queue. The items in the async queue might not have completely > flushed to disk. So RVV mismatch with the entries' status. > When restored and GII, since RVVs are the same, no GII will be triggered. > Thus the data mismatched in different members. > To fix it, introduce a step to lock rvvs for all the regions of all the > diskstores that will be backup. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (GEODE-10279) Need to lock RVV and flush before backup
Xiaojian Zhou created GEODE-10279: - Summary: Need to lock RVV and flush before backup Key: GEODE-10279 URL: https://issues.apache.org/jira/browse/GEODE-10279 Project: Geode Issue Type: Bug Reporter: Xiaojian Zhou When using the async disk writer, the in-memory RVV already reflects all the operations in the async queue, but those items might not yet have been fully flushed to disk, so the persisted RVV does not match the entries' actual status. After a restore, since the RVVs are the same, no GII is triggered, and the data ends up mismatched across members. To fix this, introduce a step that locks the RVVs of all the regions in all the disk stores being backed up, and flushes them, before the backup runs. -- This message was sent by Atlassian Jira (v8.20.7#820007)
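A minimal sketch of the prepare-for-backup step the ticket proposes, assuming hypothetical DiskStoreHandle and RegionHandle interfaces rather than Geode's actual backup API: lock every region's RVV, flush the async queues so the on-disk entries catch up with the RVVs, take the backup, then release the locks.
{code:java}
import java.util.Collection;

// Hypothetical interfaces for the sketch; not Geode's real types.
interface RegionHandle {
  void lockRegionVersionVector();
  void unlockRegionVersionVector();
}

interface DiskStoreHandle {
  Collection<RegionHandle> regions();
  void flushAsyncQueue(); // force queued oplog writes to disk
}

final class BackupPreparation {
  static void backupWithConsistentRvv(Collection<DiskStoreHandle> diskStores,
      Runnable doBackup) {
    // 1. Lock RVVs so no new versions are recorded while we flush and copy.
    for (DiskStoreHandle ds : diskStores) {
      ds.regions().forEach(RegionHandle::lockRegionVersionVector);
    }
    try {
      // 2. Flush async queues so the persisted entries match the locked RVVs.
      for (DiskStoreHandle ds : diskStores) {
        ds.flushAsyncQueue();
      }
      // 3. Take the backup while RVVs and entries are consistent.
      doBackup.run();
    } finally {
      // 4. Release the locks so normal operations resume.
      for (DiskStoreHandle ds : diskStores) {
        ds.regions().forEach(RegionHandle::unlockRegionVersionVector);
      }
    }
  }
}
{code}
Locking before flushing keeps new operations from slipping into the queue between the flush and the copy, which is the RVV/entry mismatch the ticket describes.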
[jira] [Updated] (GEODE-10279) Need to lock RVV and flush before backup
[ https://issues.apache.org/jira/browse/GEODE-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Murmann updated GEODE-10279: -- Labels: needsTriage (was: ) > Need to lock RVV and flush before backup > > > Key: GEODE-10279 > URL: https://issues.apache.org/jira/browse/GEODE-10279 > Project: Geode > Issue Type: Bug >Reporter: Xiaojian Zhou >Priority: Major > Labels: needsTriage > > When using async disk writer, in memory RVV has contained all the operations > in async queue. The items in the async queue might not have completely > flushed to disk. So RVV mismatch with the entries' status. > When restored and GII, since RVVs are the same, no GII will be triggered. > Thus the data mismatched in different members. > To fix it, introduce a step to lock rvvs for all the regions of all the > diskstores that will be backup. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-10279) Need to lock RVV and flush before backup
[ https://issues.apache.org/jira/browse/GEODE-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531984#comment-17531984 ] ASF subversion and git services commented on GEODE-10279: - Commit b617db87a97516972508504ffa5b662a83bd78a8 in geode's branch refs/heads/feature/GEODE-10279 from zhouxh [ https://gitbox.apache.org/repos/asf?p=geode.git;h=b617db87a9 ] GEODE-10279: Need to lock RVV and flush before backup > Need to lock RVV and flush before backup > > > Key: GEODE-10279 > URL: https://issues.apache.org/jira/browse/GEODE-10279 > Project: Geode > Issue Type: Bug >Reporter: Xiaojian Zhou >Assignee: Xiaojian Zhou >Priority: Major > Labels: needsTriage > > When using async disk writer, in memory RVV has contained all the operations > in async queue. The items in the async queue might not have completely > flushed to disk. So RVV mismatch with the entries' status. > When restored and GII, since RVVs are the same, no GII will be triggered. > Thus the data mismatched in different members. > To fix it, introduce a step to lock rvvs for all the regions of all the > diskstores that will be backup. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (GEODE-10279) Need to lock RVV and flush before backup
[ https://issues.apache.org/jira/browse/GEODE-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated GEODE-10279: --- Labels: needsTriage pull-request-available (was: needsTriage) > Need to lock RVV and flush before backup > > > Key: GEODE-10279 > URL: https://issues.apache.org/jira/browse/GEODE-10279 > Project: Geode > Issue Type: Bug >Reporter: Xiaojian Zhou >Assignee: Xiaojian Zhou >Priority: Major > Labels: needsTriage, pull-request-available > > When using async disk writer, in memory RVV has contained all the operations > in async queue. The items in the async queue might not have completely > flushed to disk. So RVV mismatch with the entries' status. > When restored and GII, since RVVs are the same, no GII will be triggered. > Thus the data mismatched in different members. > To fix it, introduce a step to lock rvvs for all the regions of all the > diskstores that will be backup. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (GEODE-9484) Data inconsistency in replicated region with 3 or more servers, and one server is down
[ https://issues.apache.org/jira/browse/GEODE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17532066#comment-17532066 ] Ivan Godwin commented on GEODE-9484: Noting possible issue found during internal testing. Will analyze further and update. {code:java} [fatal 2022/05/04 18:27:45.084 GMT gemfire-cluster-server-3 tid=0x6c] While pushing message :41000; callbackArg=null; processorId=0; op=CREATE; applied=false; directAck=true; posdup=false; hasDelta=false; hasOldValue=false; version={v1; rv1; time=1651688854339} FilterRoutingInfo(remote={gemfire-cluster-server-0(gemfire-cluster-server-0:1):41000=}); lastModified=1651688854339; key=0; newValue=(5 bytes); eventId=EventID[id=58 bytes;threadID=853971;sequenceID=0]; deserializationPolicy=LAZY; context=identity(gemfire-clients-564c765b59-vg4wb(SpringBasedClientCacheApplication:1:loner):48828:dd7d5290:SpringBasedClientCacheApplication,connection=1)> to recipients: :41000> java.lang.NullPointerException at org.apache.geode.internal.tcp.TCPConduit.getFirstScanForConnection(TCPConduit.java:958) at org.apache.geode.distributed.internal.direct.DirectChannel.getConnections(DirectChannel.java:477) at org.apache.geode.distributed.internal.direct.DirectChannel.sendToMany(DirectChannel.java:277) at org.apache.geode.distributed.internal.direct.DirectChannel.sendToOne(DirectChannel.java:186) at org.apache.geode.distributed.internal.direct.DirectChannel.send(DirectChannel.java:541) at org.apache.geode.distributed.internal.DistributionImpl.directChannelSend(DistributionImpl.java:348) at org.apache.geode.distributed.internal.DistributionImpl.send(DistributionImpl.java:293) at org.apache.geode.distributed.internal.ClusterDistributionManager.sendViaMembershipManager(ClusterDistributionManager.java:2067) at org.apache.geode.distributed.internal.ClusterDistributionManager.sendOutgoing(ClusterDistributionManager.java:1994) at org.apache.geode.distributed.internal.ClusterDistributionManager.sendMessage(ClusterDistributionManager.java:2031) at org.apache.geode.distributed.internal.ClusterDistributionManager.putOutgoing(ClusterDistributionManager.java:1088) at org.apache.geode.internal.cache.DistributedCacheOperation._distribute(DistributedCacheOperation.java:556) at org.apache.geode.internal.cache.DistributedCacheOperation.startOperation(DistributedCacheOperation.java:267) at org.apache.geode.internal.cache.BucketRegion.basicPutPart2(BucketRegion.java:715) at org.apache.geode.internal.cache.map.RegionMapPut.doBeforeCompletionActions(RegionMapPut.java:282) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutAndDeliverEvent(AbstractRegionMapPut.java:301) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.runWithIndexUpdatingInProgress(AbstractRegionMapPut.java:308) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutIfPreconditionsSatisified(AbstractRegionMapPut.java:296) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutOnSynchronizedRegionEntry(AbstractRegionMapPut.java:282) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutOnRegionEntryInMap(AbstractRegionMapPut.java:273) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.addRegionEntryToMapAndDoPut(AbstractRegionMapPut.java:251) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutRetryingIfNeeded(AbstractRegionMapPut.java:216) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doWithIndexInUpdateMode(AbstractRegionMapPut.java:198) at 
org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPut(AbstractRegionMapPut.java:180) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.runWhileLockedForCacheModification(AbstractRegionMapPut.java:119) at org.apache.geode.internal.cache.map.RegionMapPut.runWhileLockedForCacheModification(RegionMapPut.java:161) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.put(AbstractRegionMapPut.java:169) at org.apache.geode.internal.cache.AbstractRegionMap.basicPut(AbstractRegionMap.java:2016) at org.apache.geode.internal.cache.BucketRegion.virtualPut(BucketRegion.java:544) at org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5635) at org.apache.geode.internal.cache.PartitionedRegionDataStore.putLocally(PartitionedRegionDataStore.java:1193) at org.apache.geode.internal.cache.PartitionedRegion.putInBucket(PartitionedRegion.java:3033) at org.apache.geode.internal.cache.PartitionedRegion.virtualPut(PartitionedRegion.java:2248) at org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:171) at org.apache.geode.internal.cache.LocalRegion.basicUpdate(LocalRegion.java:5628) at org.apache.geode.internal.cache.LocalRegion.basicUpdate(LocalRegion.java:5588) at org.apache.geode.internal.cache.LocalRegion.basicBridgePut(LocalRegion.java:5259)
[jira] [Comment Edited] (GEODE-9484) Data inconsistency in replicated region with 3 or more servers, and one server is down
[ https://issues.apache.org/jira/browse/GEODE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17532066#comment-17532066 ] Ivan Godwin edited comment on GEODE-9484 at 5/5/22 5:14 AM: Noting possible issue found during internal testing. Will analyze further and update. {code:java} [fatal 2022/05/04 18:27:45.084 GMT tid=0x6c] While pushing message (:1):41000; callbackArg=null; processorId=0; op=CREATE; applied=false; directAck=true; posdup=false; hasDelta=false; hasOldValue=false; version={v1; rv1; time=1651688854339} FilterRoutingInfo(remote={(:1):41000=}); lastModified=1651688854339; key=0; newValue=(5 bytes); eventId=EventID[id=58 bytes;threadID=853971;sequenceID=0]; deserializationPolicy=LAZY; context=identity((SpringBasedClientCacheApplication:1:loner):48828:dd7d5290:SpringBasedClientCacheApplication,connection=1)> to recipients: <(:1):41000> java.lang.NullPointerException at org.apache.geode.internal.tcp.TCPConduit.getFirstScanForConnection(TCPConduit.java:958) at org.apache.geode.distributed.internal.direct.DirectChannel.getConnections(DirectChannel.java:477) at org.apache.geode.distributed.internal.direct.DirectChannel.sendToMany(DirectChannel.java:277) at org.apache.geode.distributed.internal.direct.DirectChannel.sendToOne(DirectChannel.java:186) at org.apache.geode.distributed.internal.direct.DirectChannel.send(DirectChannel.java:541) at org.apache.geode.distributed.internal.DistributionImpl.directChannelSend(DistributionImpl.java:348) at org.apache.geode.distributed.internal.DistributionImpl.send(DistributionImpl.java:293) at org.apache.geode.distributed.internal.ClusterDistributionManager.sendViaMembershipManager(ClusterDistributionManager.java:2067) at org.apache.geode.distributed.internal.ClusterDistributionManager.sendOutgoing(ClusterDistributionManager.java:1994) at org.apache.geode.distributed.internal.ClusterDistributionManager.sendMessage(ClusterDistributionManager.java:2031) at org.apache.geode.distributed.internal.ClusterDistributionManager.putOutgoing(ClusterDistributionManager.java:1088) at org.apache.geode.internal.cache.DistributedCacheOperation._distribute(DistributedCacheOperation.java:556) at org.apache.geode.internal.cache.DistributedCacheOperation.startOperation(DistributedCacheOperation.java:267) at org.apache.geode.internal.cache.BucketRegion.basicPutPart2(BucketRegion.java:715) at org.apache.geode.internal.cache.map.RegionMapPut.doBeforeCompletionActions(RegionMapPut.java:282) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutAndDeliverEvent(AbstractRegionMapPut.java:301) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.runWithIndexUpdatingInProgress(AbstractRegionMapPut.java:308) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutIfPreconditionsSatisified(AbstractRegionMapPut.java:296) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutOnSynchronizedRegionEntry(AbstractRegionMapPut.java:282) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutOnRegionEntryInMap(AbstractRegionMapPut.java:273) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.addRegionEntryToMapAndDoPut(AbstractRegionMapPut.java:251) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutRetryingIfNeeded(AbstractRegionMapPut.java:216) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doWithIndexInUpdateMode(AbstractRegionMapPut.java:198) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPut(AbstractRegionMapPut.java:180) at 
org.apache.geode.internal.cache.map.AbstractRegionMapPut.runWhileLockedForCacheModification(AbstractRegionMapPut.java:119) at org.apache.geode.internal.cache.map.RegionMapPut.runWhileLockedForCacheModification(RegionMapPut.java:161) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.put(AbstractRegionMapPut.java:169) at org.apache.geode.internal.cache.AbstractRegionMap.basicPut(AbstractRegionMap.java:2016) at org.apache.geode.internal.cache.BucketRegion.virtualPut(BucketRegion.java:544) at org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5635) at org.apache.geode.internal.cache.PartitionedRegionDataStore.putLocally(PartitionedRegionDataStore.java:1193) at org.apache.geode.internal.cache.PartitionedRegion.putInBucket(PartitionedRegion.java:3033) at org.apache.geode.internal.cache.PartitionedRegion.virtualPut(PartitionedRegion.java:2248) at org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:171) at org.apache.geode.internal.cache.LocalRegion.basicUpdate(LocalRegion.java:5628) at org.apache.geode.internal.cache.LocalRegion.basicUpdate(LocalRegion.java:5588) at org.apache.geode.internal.cache.LocalRegion.basicBridgePut(LocalRegion.java:5259) at org.apache.geode.internal.cache.tier.sock