[ https://issues.apache.org/jira/browse/GEODE-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17244358#comment-17244358 ]

ASF subversion and git services commented on GEODE-8745:
--------------------------------------------------------

Commit 5d6dba720a85fedbe3ddc1747dd59ba6214b76cc in geode's branch 
refs/heads/develop from Nabarun Nag
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=5d6dba7 ]

GEODE-8745: locking maps, to prevent concurrent use by Listeners (#5815)

   * Also re-checking the isPrimary status, because after the first check,
handleFailover may take the lock and null out the maps
   * This could cause NPEs, so the status is checked again after the lock is acquired.
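
For context, a minimal sketch of the check-then-re-check-under-lock pattern the commit message describes is shown below. The class and member names (UnprocessedEventTracker, recordEvent, and the map being nulled in handleFailover) are illustrative assumptions for this sketch, not Geode's actual internals.

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only; names do not claim to match Geode's internal classes.
public class UnprocessedEventTracker {
  private final Object lock = new Object();
  private volatile boolean isPrimary = false;
  private Map<String, Long> unprocessedEvents = new HashMap<>();

  // Called by a cache listener when an event is observed on a secondary member.
  public void recordEvent(String eventId, long timestamp) {
    if (isPrimary) {
      return; // first check, done without holding the lock
    }
    synchronized (lock) {
      // Re-check after acquiring the lock: handleFailover may have taken the
      // lock in between and nulled out the map, which would otherwise NPE here.
      if (isPrimary || unprocessedEvents == null) {
        return;
      }
      unprocessedEvents.put(eventId, timestamp);
    }
  }

  // Invoked when this member becomes the primary for the queue.
  public void handleFailover() {
    synchronized (lock) {
      isPrimary = true;
      unprocessedEvents = null; // the map is cleaned up while holding the lock
    }
  }
}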

> Closing the region backing the queue when the serial gateway sender is 
> stopped.
> -------------------------------------------------------------------------------
>
>                 Key: GEODE-8745
>                 URL: https://issues.apache.org/jira/browse/GEODE-8745
>             Project: Geode
>          Issue Type: Task
>          Components: wan
>            Reporter: Nabarun Nag
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.14.0
>
>
> Since the commit for GEODE-7458, when the sender is stopped, the regions backing 
> the queues are no longer closed; only their cache listeners are removed.
> This causes a problem: because the regions continue to exist, they keep 
> storing entry events and hence the queue size never gets to zero.
> Also, because the region already exists before the cache listener is attached 
> when the sender is restarted, some entries are never removed from the 
> unprocessed event map.
>  
> As mentioned in the PR for GEODE-7458, "This option is only applicable for 
> Gateway Senders with enabled persistence."
> Hence we believe it is OK to close the region, as the disk files will still 
> be maintained, so when the sender is restarted the values can be recovered 
> from the disk stores.
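
A rough sketch of the stop behaviour argued for in the quoted description is below, assuming the sender keeps a reference to the region backing its queue. The stopSender() method and queueRegion field are hypothetical names for this sketch; only Region.close() and the persistence semantics come from Geode's public API and the discussion above.

import org.apache.geode.cache.Region;

// Illustrative sketch only; stopSender() and queueRegion are hypothetical names.
public class SerialSenderStopSketch {
  private Region<Object, Object> queueRegion; // region backing the sender's queue

  public void stopSender() {
    // Close the region instead of only removing its cache listeners, so it
    // stops accumulating entry events while the sender is stopped. For a
    // persistent gateway sender the disk files are kept, so the queued values
    // can be recovered from the disk store when the sender is restarted.
    if (queueRegion != null) {
      queueRegion.close();
      queueRegion = null;
    }
  }
}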



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
