[ https://issues.apache.org/jira/browse/GEODE-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Donal Evans resolved GEODE-8620.
--------------------------------
Fix Version/s: 1.13.1
1.14.0
Resolution: Fixed
> Actual redundancy of -1 in restore redundancy result
> ----------------------------------------------------
>
> Key: GEODE-8620
> URL: https://issues.apache.org/jira/browse/GEODE-8620
> Project: Geode
> Issue Type: Bug
> Components: gfsh, management
> Affects Versions: 1.13.0
> Reporter: Aaron Lindsey
> Assignee: Donal Evans
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.14.0, 1.13.1
>
>
> Steps to reproduce (a rough gfsh sketch follows the list):
> # Create a Geode cluster with 1 locator and 2 servers.
> # Create a region of type PARTITION_REDUNDANT.
> # Put an entry into the region.
> # Trigger a restore redundancy operation via the management REST API or gfsh.
> # The result from the restore redundancy operation states that the actual
> redundancy for the region is -1. However, the actual redundancy at this point
> should be 1, because there are enough cache servers in the cluster to hold
> the redundant copy.
> # Stop one of the servers.
> # Trigger another restore redundancy operation via the management REST API
> or gfsh.
> # The result from the second restore redundancy operation again states that
> the actual redundancy for the region is -1. However, the region should be
> counted as having zero redundant copies at this point because there is only
> one cache server.
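> A rough gfsh sketch of the steps above (the restore redundancy option name is
> from memory and should be checked against "help restore redundancy"):
> start locator --name=locator1
> start server --name=server1
> start server --name=server2
> create region --name=example --type=PARTITION_REDUNDANT
> put --region=/example --key=key1 --value=value1
> restore redundancy --include-region=/example
> (result reports actual redundancy -1; with two servers it should be 1)
> stop server --name=server2
> restore redundancy --include-region=/example
> (result again reports actual redundancy -1; with one server left it should be 0)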
> I encountered this issue while using the management REST API, although the
> same issue happens with the gfsh command. I assume fixing the gfsh command
> would also fix the management REST API. If not, I can break this out into two
> separate JIRAs.
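> For reference, a rough sketch of the equivalent management REST API call; the
> endpoint path and request body field below are my best recollection of the v1
> operations API and should be verified against the locator's management REST
> API docs, and localhost:7070 assumes the default http-service-port:
> curl -X POST http://localhost:7070/management/v1/operations/restoreRedundancy \
>   -H 'Content-Type: application/json' \
>   -d '{"includeRegions": ["example"]}'
> The -1 shows up in the actual redundancy reported for the region in the
> operation result.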