: Noble: This new test has been failing ~50% of all Jenkins builds since it 
: was added.

I *THINK* the problem is that (like most tests) this test class uses SSL 
randomization in Solr, but testConfigset() assumes it can open a raw 
URLConnection w/o any knowledge of the certificates in use.

We have other tests that look at HTTP headers -- like CacheHeaderTest -- 
and the way they work is by using HttpSolrClient.getHttpClient() (which 
should have all the SSL test context configured on it, IIUC).
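(For illustration only -- re-using HttpSolrClient.getHttpClient() as 
CacheHeaderTest does is still what I'd suggest. But the missing piece in the 
raw-URLConnection approach is an SSLSocketFactory built from the test 
truststore rather than the JVM defaults. A minimal stand-alone sketch; the 
class name and helper are mine, not anything in Solr, and the truststore 
path in main() is just the JDK's own cacerts so the snippet runs anywhere:)

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;

public class TrustStoreConnectionSketch {

  // Build an SSLSocketFactory that trusts the certificates in the given
  // truststore (e.g. the randomized test certs) instead of only the JVM defaults.
  static SSLSocketFactory socketFactoryFor(String truststorePath) throws Exception {
    KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
    try (InputStream in = new FileInputStream(truststorePath)) {
      trustStore.load(in, null); // null password: skip integrity check, just read certs
    }
    TrustManagerFactory tmf =
        TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    tmf.init(trustStore);
    SSLContext ctx = SSLContext.getInstance("TLS");
    ctx.init(null, tmf.getTrustManagers(), null);
    return ctx.getSocketFactory();
  }

  public static void main(String[] args) throws Exception {
    // Demonstrate with the JDK's default truststore; a Solr test would point
    // this at the randomized SSL test truststore instead.
    String cacerts = System.getProperty("java.home") + "/lib/security/cacerts";
    SSLSocketFactory factory = socketFactoryFor(cacerts);
    System.out.println(factory != null ? "factory-ok" : "factory-null");
  }
}
```

The point being: an HttpsURLConnection would need 
conn.setSSLSocketFactory(factory) before connect(), which is exactly the 
wiring the test skips -- and which the test-framework's HttpClient already 
has done for it.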



org.apache.solr.search.TestCoordinatorRole > testConfigset FAILED
    javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at __randomizedtesting.SeedInfo.seed([8D2A8B2582BA7A2D:9E6C35FFB67C7C05]:0)
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:371)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:314)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:309)
        at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1357)
        at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1232)
        at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1175)
        at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:396)
        at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:480)
        at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:458)
        at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:201)
        at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
        at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1505)
        at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1420)
        at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:455)
        at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:426)
        at java.base/sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:580)
        at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:187)
        at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:142)
        at org.apache.solr.search.TestCoordinatorRole.testConfigset(TestCoordinatorRole.java:626)


: 
: 
: : Date: Mon, 18 Sep 2023 19:41:15 +0000
: : From: no...@apache.org
: : Reply-To: dev@solr.apache.org
: : To: "comm...@solr.apache.org" <comm...@solr.apache.org>
: : Subject: [solr] branch main updated: More test cases for Coordinator node role (#1782)
: : 
: : This is an automated email from the ASF dual-hosted git repository.
: : 
: : noble pushed a commit to branch main
: : in repository https://gitbox.apache.org/repos/asf/solr.git
: : 
: : 
: : The following commit(s) were added to refs/heads/main by this push:
: :      new b33dd14b60b More test cases for Coordinator node role (#1782)
: : b33dd14b60b is described below
: : 
: : commit b33dd14b60b237980044d406dc7911f20c605530
: : Author: patsonluk <patson...@users.noreply.github.com>
: : AuthorDate: Mon Sep 18 12:41:08 2023 -0700
: : 
: :     More test cases for Coordinator node role (#1782)
: : ---
: :  .../solr/configsets/cache-control/conf/schema.xml  |  27 +++
: :  .../configsets/cache-control/conf/solrconfig.xml   |  54 +++++
: :  .../apache/solr/search/TestCoordinatorRole.java    | 260 +++++++++++++++++++--
: :  3 files changed, 324 insertions(+), 17 deletions(-)
: : 
: : diff --git a/solr/core/src/test-files/solr/configsets/cache-control/conf/schema.xml b/solr/core/src/test-files/solr/configsets/cache-control/conf/schema.xml
: : new file mode 100644
: : index 00000000000..36d5cfd2588
: : --- /dev/null
: : +++ b/solr/core/src/test-files/solr/configsets/cache-control/conf/schema.xml
: : @@ -0,0 +1,27 @@
: : +<?xml version="1.0" encoding="UTF-8" ?>
: : +<!--
: : + Licensed to the Apache Software Foundation (ASF) under one or more
: : + contributor license agreements.  See the NOTICE file distributed with
: : + this work for additional information regarding copyright ownership.
: : + The ASF licenses this file to You under the Apache License, Version 2.0
: : + (the "License"); you may not use this file except in compliance with
: : + the License.  You may obtain a copy of the License at
: : +     http://www.apache.org/licenses/LICENSE-2.0
: : + Unless required by applicable law or agreed to in writing, software
: : + distributed under the License is distributed on an "AS IS" BASIS,
: : + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
: : + See the License for the specific language governing permissions and
: : + limitations under the License.
: : +-->
: : +<schema name="minimal" version="1.1">
: : +    <fieldType name="string" class="solr.StrField"/>
: : +    <fieldType name="int" class="${solr.tests.IntegerFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
: : +    <fieldType name="long" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
: : +    <dynamicField name="*" type="string" indexed="true" stored="true"/>
: : +    <!-- for versioning -->
: : +    <field name="_version_" type="long" indexed="true" stored="true"/>
: : +    <field name="_root_" type="string" indexed="true" stored="true" multiValued="false" required="false"/>
: : +    <field name="id" type="string" indexed="true" stored="true"/>
: : +    <dynamicField name="*_s"  type="string"  indexed="true"  stored="true" />
: : +    <uniqueKey>id</uniqueKey>
: : +</schema>
: : \ No newline at end of file
: : diff --git a/solr/core/src/test-files/solr/configsets/cache-control/conf/solrconfig.xml b/solr/core/src/test-files/solr/configsets/cache-control/conf/solrconfig.xml
: : new file mode 100644
: : index 00000000000..bd27a88952a
: : --- /dev/null
: : +++ b/solr/core/src/test-files/solr/configsets/cache-control/conf/solrconfig.xml
: : @@ -0,0 +1,54 @@
: : +<?xml version="1.0" ?>
: : +
: : +<!--
: : + Licensed to the Apache Software Foundation (ASF) under one or more
: : + contributor license agreements.  See the NOTICE file distributed with
: : + this work for additional information regarding copyright ownership.
: : + The ASF licenses this file to You under the Apache License, Version 2.0
: : + (the "License"); you may not use this file except in compliance with
: : + the License.  You may obtain a copy of the License at
: : +     http://www.apache.org/licenses/LICENSE-2.0
: : + Unless required by applicable law or agreed to in writing, software
: : + distributed under the License is distributed on an "AS IS" BASIS,
: : + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
: : + See the License for the specific language governing permissions and
: : + limitations under the License.
: : +-->
: : +
: : +<!-- Minimal solrconfig.xml with /select, /admin and /update only -->
: : +
: : +<config>
: : +
: : +    <dataDir>${solr.data.dir:}</dataDir>
: : +
: : +    <directoryFactory name="DirectoryFactory"
: : +                      class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
: : +    <schemaFactory class="ClassicIndexSchemaFactory"/>
: : +
: : +    <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
: : +
: : +    <updateHandler class="solr.DirectUpdateHandler2">
: : +        <commitWithin>
: : +            <softCommit>${solr.commitwithin.softcommit:true}</softCommit>
: : +        </commitWithin>
: : +        <updateLog class="${solr.ulog:solr.UpdateLog}"></updateLog>
: : +    </updateHandler>
: : +
: : +    <requestDispatcher>
: : +        <httpCaching>
: : +            <cacheControl>max-age=30, public</cacheControl>
: : +        </httpCaching>
: : +    </requestDispatcher>
: : +
: : +    <requestHandler name="/select" class="solr.SearchHandler">
: : +        <lst name="defaults">
: : +            <str name="echoParams">explicit</str>
: : +            <str name="indent">true</str>
: : +            <str name="df">text</str>
: : +        </lst>
: : +
: : +    </requestHandler>
: : +    <indexConfig>
: : +        <mergeScheduler class="${solr.mscheduler:org.apache.lucene.index.ConcurrentMergeScheduler}"/>
: : +    </indexConfig>
: : +</config>
: : \ No newline at end of file
: : diff --git a/solr/core/src/test/org/apache/solr/search/TestCoordinatorRole.java b/solr/core/src/test/org/apache/solr/search/TestCoordinatorRole.java
: : index 538c6b44703..581f048785d 100644
: : --- a/solr/core/src/test/org/apache/solr/search/TestCoordinatorRole.java
: : +++ b/solr/core/src/test/org/apache/solr/search/TestCoordinatorRole.java
: : @@ -21,6 +21,8 @@ import static org.apache.solr.common.params.CommonParams.OMIT_HEADER;
: :  import static org.apache.solr.common.params.CommonParams.TRUE;
: :  
: :  import java.lang.invoke.MethodHandles;
: : +import java.net.HttpURLConnection;
: : +import java.net.URL;
: :  import java.util.ArrayList;
: :  import java.util.Date;
: :  import java.util.EnumSet;
: : @@ -51,6 +53,7 @@ import org.apache.solr.common.SolrException;
: :  import org.apache.solr.common.SolrInputDocument;
: :  import org.apache.solr.common.cloud.DocCollection;
: :  import org.apache.solr.common.cloud.Replica;
: : +import org.apache.solr.common.cloud.Slice;
: :  import org.apache.solr.common.cloud.ZkStateReader;
: :  import org.apache.solr.common.cloud.ZkStateReaderAccessor;
: :  import org.apache.solr.common.params.CommonParams;
: : @@ -585,18 +588,70 @@ public class TestCoordinatorRole extends SolrCloudTestCase {
: :      }
: :    }
: :  
: : +  public void testConfigset() throws Exception {
: : +    final int DATA_NODE_COUNT = 1;
: : +    MiniSolrCloudCluster cluster =
: : +        configureCluster(DATA_NODE_COUNT)
: : +            .addConfig("conf1", configset("cloud-minimal"))
: : +            .addConfig("conf2", configset("cache-control"))
: : +            .configure();
: : +
: : +    List<String> dataNodes =
: : +        cluster.getJettySolrRunners().stream()
: : +            .map(JettySolrRunner::getNodeName)
: : +            .collect(Collectors.toUnmodifiableList());
: : +
: : +    try {
: : +      CollectionAdminRequest.createCollection("c1", "conf1", 2, 1).process(cluster.getSolrClient());
: : +      cluster.waitForActiveCollection("c1", 2, 2);
: : +      CollectionAdminRequest.createCollection("c2", "conf2", 2, 1).process(cluster.getSolrClient());
: : +      cluster.waitForActiveCollection("c2", 2, 2);
: : +
: : +      System.setProperty(NodeRoles.NODE_ROLES_PROP, "coordinator:on");
: : +      JettySolrRunner coordinatorJetty;
: : +      try {
: : +        coordinatorJetty = cluster.startJettySolrRunner();
: : +      } finally {
: : +        System.clearProperty(NodeRoles.NODE_ROLES_PROP);
: : +      }
: : +
: : +      // Tricky to test configset, since operations such as collection status would direct it
: : +      // to the OS node.
: : +      // So we use a query and check the cache response header, which is determined by the
: : +      // solrconfig.xml in the configset.
: : +      // However, using a Solr client would drop the cache response header, hence we need to
: : +      // use plain java HttpURLConnection
: : +      URL url = new URL(coordinatorJetty.getBaseUrl() + "/c1/select?q=*:*");
: : +      HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
: : +      urlConnection.connect();
: : +
: : +      // conf1 has no cache-control
: : +      assertNull(urlConnection.getHeaderField("cache-control"));
: : +
: : +      url = new URL(coordinatorJetty.getBaseUrl() + "/c2/select?q=*:*");
: : +      urlConnection = (HttpURLConnection) url.openConnection();
: : +      urlConnection.connect();
: : +
: : +      // conf2 has cache-control defined
: : +      assertTrue(urlConnection.getHeaderField("cache-control").contains("max-age=30"));
: : +    } finally {
: : +      cluster.shutdown();
: : +    }
: : +  }
: : +
: :    public void testWatch() throws Exception {
: : -    final int DATA_NODE_COUNT = 2;
: : +    final int DATA_NODE_COUNT = 1;
: :      MiniSolrCloudCluster cluster =
: :          configureCluster(DATA_NODE_COUNT)
: :              .addConfig("conf1", configset("cloud-minimal"))
: :              .configure();
: : -    final String TEST_COLLECTION = "c1";
: : +    final String TEST_COLLECTION_1 = "c1";
: : +    final String TEST_COLLECTION_2 = "c2";
: :  
: :      try {
: :        CloudSolrClient client = cluster.getSolrClient();
: : -      CollectionAdminRequest.createCollection(TEST_COLLECTION, "conf1", 1, 2).process(client);
: : -      cluster.waitForActiveCollection(TEST_COLLECTION, 1, 2);
: : +      CollectionAdminRequest.createCollection(TEST_COLLECTION_1, "conf1", 1, 2).process(client);
: : +      cluster.waitForActiveCollection(TEST_COLLECTION_1, 1, 2);
: :        System.setProperty(NodeRoles.NODE_ROLES_PROP, "coordinator:on");
: :        JettySolrRunner coordinatorJetty;
: :        try {
: : @@ -610,26 +665,37 @@ public class TestCoordinatorRole extends SolrCloudTestCase {
: :        ZkStateReaderAccessor zkWatchAccessor = new ZkStateReaderAccessor(zkStateReader);
: :  
: :        // no watch at first
: : -      assertTrue(!zkWatchAccessor.getWatchedCollections().contains(TEST_COLLECTION));
: : +      assertTrue(!zkWatchAccessor.getWatchedCollections().contains(TEST_COLLECTION_1));
: :        new QueryRequest(new SolrQuery("*:*"))
: :            .setPreferredNodes(List.of(coordinatorJetty.getNodeName()))
: : -          .process(client, TEST_COLLECTION); // ok no exception thrown
: : +          .process(client, TEST_COLLECTION_1); // ok no exception thrown
: :  
: :        // now it should be watching it after the query
: : -      assertTrue(zkWatchAccessor.getWatchedCollections().contains(TEST_COLLECTION));
: : +      assertTrue(zkWatchAccessor.getWatchedCollections().contains(TEST_COLLECTION_1));
: : +
: : +      // add another collection
: : +      CollectionAdminRequest.createCollection(TEST_COLLECTION_2, "conf1", 1, 2).process(client);
: : +      cluster.waitForActiveCollection(TEST_COLLECTION_2, 1, 2);
: : +      new QueryRequest(new SolrQuery("*:*"))
: : +          .setPreferredNodes(List.of(coordinatorJetty.getNodeName()))
: : +          .process(client, TEST_COLLECTION_2);
: : +      // watch both collections
: : +      assertTrue(zkWatchAccessor.getWatchedCollections().contains(TEST_COLLECTION_1));
: : +      assertTrue(zkWatchAccessor.getWatchedCollections().contains(TEST_COLLECTION_2));
: :  
: : -      CollectionAdminRequest.deleteReplica(TEST_COLLECTION, "shard1", 1).process(client);
: : -      cluster.waitForActiveCollection(TEST_COLLECTION, 1, 1);
: : +      CollectionAdminRequest.deleteReplica(TEST_COLLECTION_1, "shard1", 1).process(client);
: : +      cluster.waitForActiveCollection(TEST_COLLECTION_1, 1, 1);
: :        new QueryRequest(new SolrQuery("*:*"))
: :            .setPreferredNodes(List.of(coordinatorJetty.getNodeName()))
: : -          .process(client, TEST_COLLECTION); // ok no exception thrown
: : +          .process(client, TEST_COLLECTION_1); // ok no exception thrown
: :  
: :        // still one replica left, should not remove the watch
: : -      assertTrue(zkWatchAccessor.getWatchedCollections().contains(TEST_COLLECTION));
: : +      assertTrue(zkWatchAccessor.getWatchedCollections().contains(TEST_COLLECTION_1));
: :  
: : -      CollectionAdminRequest.deleteCollection(TEST_COLLECTION).process(client);
: : -      zkStateReader.waitForState(TEST_COLLECTION, 30, TimeUnit.SECONDS, Objects::isNull);
: : -      assertNull(zkStateReader.getCollection(TEST_COLLECTION)); // check the cluster state
: : +      // now delete c1 and ensure it's cleared from various logic
: : +      CollectionAdminRequest.deleteCollection(TEST_COLLECTION_1).process(client);
: : +      zkStateReader.waitForState(TEST_COLLECTION_1, 30, TimeUnit.SECONDS, Objects::isNull);
: : +      assertNull(zkStateReader.getCollection(TEST_COLLECTION_1)); // check the cluster state
: :  
: :        // ensure querying throws exception
: :        assertExceptionThrownWithMessageContaining(
: : @@ -638,10 +704,170 @@ public class TestCoordinatorRole extends SolrCloudTestCase {
: :            () ->
: :                new QueryRequest(new SolrQuery("*:*"))
: :                    .setPreferredNodes(List.of(coordinatorJetty.getNodeName()))
: : -                  .process(client, TEST_COLLECTION));
: : +                  .process(client, TEST_COLLECTION_1));
: : +
: : +      // watch should be removed after c1 deletion
: : +      assertTrue(!zkWatchAccessor.getWatchedCollections().contains(TEST_COLLECTION_1));
: : +      // still watching c2
: : +      assertTrue(zkWatchAccessor.getWatchedCollections().contains(TEST_COLLECTION_2));
: : +    } finally {
: : +      cluster.shutdown();
: : +    }
: : +  }
: : +
: : +  public void testSplitShard() throws Exception {
: : +    final int DATA_NODE_COUNT = 1;
: : +    MiniSolrCloudCluster cluster =
: : +        configureCluster(DATA_NODE_COUNT)
: : +            .addConfig("conf1", configset("cloud-minimal"))
: : +            .configure();
: : +
: : +    try {
: : +
: : +      final String COLLECTION_NAME = "c1";
: : +      CollectionAdminRequest.createCollection(COLLECTION_NAME, "conf1", 1, 1)
: : +          .process(cluster.getSolrClient());
: : +      cluster.waitForActiveCollection(COLLECTION_NAME, 1, 1);
: : +
: : +      int DOC_PER_COLLECTION_COUNT = 1000;
: : +      UpdateRequest ur = new UpdateRequest();
: : +      for (int i = 0; i < DOC_PER_COLLECTION_COUNT; i++) {
: : +        SolrInputDocument doc = new SolrInputDocument();
: : +        doc.addField("id", COLLECTION_NAME + "-" + i);
: : +        ur.add(doc);
: : +      }
: : +      CloudSolrClient client = cluster.getSolrClient();
: : +      ur.commit(client, COLLECTION_NAME);
: : +
: : +      System.setProperty(NodeRoles.NODE_ROLES_PROP, "coordinator:on");
: : +      JettySolrRunner coordinatorJetty;
: : +      try {
: : +        coordinatorJetty = cluster.startJettySolrRunner();
: : +      } finally {
: : +        System.clearProperty(NodeRoles.NODE_ROLES_PROP);
: : +      }
: : +
: : +      QueryResponse response =
: : +          new QueryRequest(new SolrQuery("*:*"))
: : +              .setPreferredNodes(List.of(coordinatorJetty.getNodeName()))
: : +              .process(client, COLLECTION_NAME);
: : +
: : +      assertEquals(DOC_PER_COLLECTION_COUNT, response.getResults().getNumFound());
: : +
: : +      // now split the shard
: : +      CollectionAdminRequest.splitShard(COLLECTION_NAME).setShardName("shard1").process(client);
: : +      waitForState(
: : +          "Failed to wait for child shards after split",
: : +          COLLECTION_NAME,
: : +          (liveNodes, collectionState) ->
: : +              collectionState.getSlice("shard1_0") != null
: : +                  && collectionState.getSlice("shard1_0").getState() == Slice.State.ACTIVE
: : +                  && collectionState.getSlice("shard1_1") != null
: : +                  && collectionState.getSlice("shard1_1").getState() == Slice.State.ACTIVE);
: : +
: : +      // delete the parent shard
: : +      CollectionAdminRequest.deleteShard(COLLECTION_NAME, "shard1").process(client);
: : +      waitForState(
: : +          "Parent shard is not yet deleted after split",
: : +          COLLECTION_NAME,
: : +          (liveNodes, collectionState) -> collectionState.getSlice("shard1") == null);
: : +
: : +      response =
: : +          new QueryRequest(new SolrQuery("*:*"))
: : +              .setPreferredNodes(List.of(coordinatorJetty.getNodeName()))
: : +              .process(client, COLLECTION_NAME);
: : +
: : +      assertEquals(DOC_PER_COLLECTION_COUNT, response.getResults().getNumFound());
: : +    } finally {
: : +      cluster.shutdown();
: : +    }
: : +  }
: : +
: : +  public void testMoveReplica() throws Exception {
: : +    final int DATA_NODE_COUNT = 2;
: : +    MiniSolrCloudCluster cluster =
: : +        configureCluster(DATA_NODE_COUNT)
: : +            .addConfig("conf1", configset("cloud-minimal"))
: : +            .configure();
: : +
: : +    List<String> dataNodes =
: : +        cluster.getJettySolrRunners().stream()
: : +            .map(JettySolrRunner::getNodeName)
: : +            .collect(Collectors.toUnmodifiableList());
: : +    try {
: : +
: : +      final String COLLECTION_NAME = "c1";
: : +      String fromNode = dataNodes.get(0); // put the shard on first data node
: : +      CollectionAdminRequest.createCollection(COLLECTION_NAME, "conf1", 1, 1)
: : +          .setCreateNodeSet(fromNode)
: : +          .process(cluster.getSolrClient());
: : +      // ensure replica is placed on the expected node
: : +      waitForState(
: : +          "Cannot find replica on first node yet",
: : +          COLLECTION_NAME,
: : +          (liveNodes, collectionState) -> {
: : +            if (collectionState.getReplicas().size() == 1) {
: : +              Replica replica = collectionState.getReplicas().get(0);
: : +              return fromNode.equals(replica.getNodeName())
: : +                  && replica.getState() == Replica.State.ACTIVE;
: : +            }
: : +            return false;
: : +          });
: : +
: : +      int DOC_PER_COLLECTION_COUNT = 1000;
: : +      UpdateRequest ur = new UpdateRequest();
: : +      for (int i = 0; i < DOC_PER_COLLECTION_COUNT; i++) {
: : +        SolrInputDocument doc = new SolrInputDocument();
: : +        doc.addField("id", COLLECTION_NAME + "-" + i);
: : +        ur.add(doc);
: : +      }
: : +      CloudSolrClient client = cluster.getSolrClient();
: : +      ur.commit(client, COLLECTION_NAME);
: : +
: : +      System.setProperty(NodeRoles.NODE_ROLES_PROP, "coordinator:on");
: : +      JettySolrRunner coordinatorJetty;
: : +      try {
: : +        coordinatorJetty = cluster.startJettySolrRunner();
: : +      } finally {
: : +        System.clearProperty(NodeRoles.NODE_ROLES_PROP);
: : +      }
: : +
: : +      QueryResponse response =
: : +          new QueryRequest(new SolrQuery("*:*"))
: : +              .setPreferredNodes(List.of(coordinatorJetty.getNodeName()))
: : +              .process(client, COLLECTION_NAME);
: : +
: : +      assertEquals(DOC_PER_COLLECTION_COUNT, response.getResults().getNumFound());
: : +
: : +      // now move the shard/replica
: : +      String replicaName = getCollectionState(COLLECTION_NAME).getReplicas().get(0).getName();
: : +      String toNodeName = dataNodes.get(1);
: : +      CollectionAdminRequest.moveReplica(COLLECTION_NAME, replicaName, toNodeName).process(client);
: : +      waitForState(
: : +          "Cannot find replica on second node yet after replica move",
: : +          COLLECTION_NAME,
: : +          (liveNodes, collectionState) -> {
: : +            if (collectionState.getReplicas().size() == 1) {
: : +              Replica replica = collectionState.getReplicas().get(0);
: : +              return toNodeName.equals(replica.getNodeName())
: : +                  && replica.getState() == Replica.State.ACTIVE;
: : +            }
: : +            return false;
: : +          });
: : +
: : +      // We must stop the first node to ensure that the query is directed to the correct node
: : +      // from the coordinator.
: : +      // If the coordinator node has the wrong info (replica on first node), it might still
: : +      // return a valid result even without this, as the first node might forward the query
: : +      // to the second node.
: : +      cluster.getJettySolrRunners().get(0).stop();
: : +
: : +      response =
: : +          new QueryRequest(new SolrQuery("*:*"))
: : +              .setPreferredNodes(List.of(coordinatorJetty.getNodeName()))
: : +              .process(client, COLLECTION_NAME);
: :  
: : -      // watch should be removed after collection deletion
: : -      assertTrue(!zkWatchAccessor.getWatchedCollections().contains(TEST_COLLECTION));
: : +      assertEquals(DOC_PER_COLLECTION_COUNT, response.getResults().getNumFound());
: :      } finally {
: :        cluster.shutdown();
: :      }
: : 
: : 
: 
: -Hoss
: http://www.lucidworks.com/
: 

-Hoss
http://www.lucidworks.com/

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@solr.apache.org
For additional commands, e-mail: dev-h...@solr.apache.org
