[GitHub] [lucene-solr] noblepaul commented on a change in pull request #1720: SOLR-14712 Standardize RPC calls in Solr
noblepaul commented on a change in pull request #1720: URL: https://github.com/apache/lucene-solr/pull/1720#discussion_r466858651 ## File path: solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrClient.java ## @@ -478,4 +479,11 @@ public Builder getThis() { return this; } } + + private final RpcFactory factory = null;//TODO Review comment: No, this is not for commit. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1684: SOLR-14613: strongly typed initial proposal for plugin interface
murblanc commented on a change in pull request #1684: URL: https://github.com/apache/lucene-solr/pull/1684#discussion_r466860061 ## File path: solr/core/src/java/org/apache/solr/cluster/placement/PlacementPlugin.java ## @@ -0,0 +1,41 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.solr.cluster.placement; + +/** + * Implemented by external plugins to control replica placement and movement on the search cluster (as well as other things + * such as cluster elasticity?) when cluster changes are required (initiated elsewhere, most likely following a Collection + * API call). + */ +public interface PlacementPlugin { Review comment: The configuration part I didn't really do yet. It's unclear to me how the `configure` method here would be used, since in order to get to it the plugin has to be loaded already... I was thinking of defining the plugin class or classes in some Solr configuration file (with the rest of the config): at least a single default plugin implementation that would be used for all placement needs, or a default plus other ones (with names that can then be selected by callers of the Collection API passing a `placement` parameter, as suggested in the changes to `CollectionAdminRequest.java`). Are there similar examples in Solr code (loading plugins, possibly on a per-collection basis) that I can get inspiration from or reuse? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1684: SOLR-14613: strongly typed initial proposal for plugin interface
murblanc commented on a change in pull request #1684: URL: https://github.com/apache/lucene-solr/pull/1684#discussion_r466862155 ## File path: solr/core/src/java/org/apache/solr/cluster/placement/AddReplicasRequest.java ## @@ -0,0 +1,62 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.solr.cluster.placement; + +import java.util.Set; + +/** + * Request for creating one or more {@link Replica}'s for one or more {@link Shard}'s of an existing {@link SolrCollection}. + * The shard might or might not already exist, plugin code can easily find out by using {@link SolrCollection#getShards()} + * and verifying if the shard name(s) from {@link #getShardNames()} are there. + * + * As opposed to {@link CreateNewCollectionRequest}, the set of {@link Node}s on which the replicas should be placed + * is specified (defaults to being equal to the set returned by {@link Cluster#getLiveNodes()}). + * + * There is no extension between this interface and {@link CreateNewCollectionRequest} in either direction + * or from a common ancestor for readability. An ancestor could make sense and would be an "abstract interface" not intended + * to be implemented directly, but this does not exist in Java. + * + * Plugin code would likely treat the two types of requests differently since here existing {@link Replica}'s must be taken + * into account for placement whereas in {@link CreateNewCollectionRequest} no {@link Replica}'s are assumed to exist. + */ +public interface AddReplicasRequest extends Request { + /** + * The {@link SolrCollection} to add {@link Replica}(s) to. The replicas are to be added to a shard that might or might + * not yet exist when the plugin's {@link PlacementPlugin#computePlacement} is called. + */ + SolrCollection getCollection(); + + /** + * Shard name(s) for which new replicas placement should be computed. The shard(s) might exist or not (that's why this + * method returns a {@link Set} of {@link String}'s and not directly a set of {@link Shard} instances). + * + * Note the Collection API allows specifying the shard name or a {@code _route_} parameter. The Solr implementation will + * convert either specification into the relevant shard name so the plugin code doesn't have to worry about this. + */ + Set getShardNames(); + + /** Replicas should only be placed on nodes from the set returned by this method. */ + Set getTargetNodes(); Review comment: This will be implemented once (by me for example) and only consumed by the plugins. But I will clarify. The `Request`s are received by the plugins. The plugins send back `WorkOrder`s (and create them through a factory interface so don't have to worry about implementation anyway). 
If you can think of better names for these abstractions, I'm interested (esp work order doesn't feel totally right). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
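As a reader's sketch only (not code from this PR), the shape of the proposed request API can be illustrated as follows; the package, the generic types on the two Set-returning methods, and the class name are assumptions inferred from the javadoc above.

```java
import java.util.Set;

import org.apache.solr.cluster.placement.AddReplicasRequest;
import org.apache.solr.cluster.placement.Node;
import org.apache.solr.cluster.placement.SolrCollection;

// Illustrative only: which accessors of the proposed AddReplicasRequest a plugin
// would consult before computing placements. It deliberately stops short of
// building a WorkOrder, since that factory API is not shown in this excerpt.
class AddReplicasRequestReader {
  void inspect(AddReplicasRequest request) {
    SolrCollection collection = request.getCollection();
    Set<String> shardNames = request.getShardNames();    // shards may or may not exist yet
    Set<Node> candidateNodes = request.getTargetNodes(); // placements must stay within this set
    // a real plugin would now decide, per requested replica, which candidate node to use
  }
}
```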
[GitHub] [lucene-solr] noblepaul commented on a change in pull request #1720: SOLR-14712 Standardize RPC calls in Solr
noblepaul commented on a change in pull request #1720: URL: https://github.com/apache/lucene-solr/pull/1720#discussion_r466860388 ## File path: solr/solrj/src/java/org/apache/solr/common/util/CallRouter.java ## @@ -0,0 +1,52 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.common.util; + +public interface CallRouter { +/** + * send to a specific node. usually admin requests + */ +CallRouter toNode(String nodeName); + +/** + * Make a request to any replica of the shard of type + */ +CallRouter toShard(String collection, String shard, ReplicaType type); + +/** + * Identify the shard using the route key and send the request to a given replica type + */ +CallRouter toShard(String collection, ReplicaType type, String routeKey); Review comment: it would conflict with the other method . we can't say if the second param is `shard` or `routeKey` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
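To make the signature conflict concrete, here is a tiny standalone illustration (the types below are stand-ins, not the real Solr classes): if both routing variants took the second String in the same position, the interface would not compile, which is why the routeKey variant flips the parameter order.

```java
// Stand-in types used only to demonstrate the overload clash discussed above.
interface RouterOverloadDemo {
  enum ReplicaType { NRT, TLOG, PULL }

  // variant 1: the second String is a shard name
  RouterOverloadDemo toShard(String collection, String shard, ReplicaType type);

  // Uncommenting this fails with "method toShard(String,String,ReplicaType) is
  // already defined": the compiler cannot tell a shard name from a route key,
  // hence the (String collection, ReplicaType type, String routeKey) order above.
  // RouterOverloadDemo toShard(String collection, String routeKey, ReplicaType type);
}
```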
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1684: SOLR-14613: strongly typed initial proposal for plugin interface
murblanc commented on a change in pull request #1684: URL: https://github.com/apache/lucene-solr/pull/1684#discussion_r466864602 ## File path: solr/core/src/java/org/apache/solr/cluster/placement/Cluster.java ## @@ -0,0 +1,46 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.solr.cluster.placement; + +import java.io.IOException; +import java.util.Optional; +import java.util.Set; + +/** + * A representation of the (initial) cluster state, providing information on which nodes are part of the cluster and a way + * to get to more detailed info. + * + * This instance can also be used as a {@link PropertyValueSource} if {@link PropertyKey}'s need to be specified with + * a global cluster target. + */ +public interface Cluster extends PropertyValueSource { + /** + * @return current set of live nodes. Never null, never empty (Solr wouldn't call the plugin if empty + * since no useful could then be done). + */ + Set getLiveNodes(); + + /** + * Returns info about the given collection if one exists. Because it is not expected for plugins to request info about + * a large number of collections, requests can only be made one by one. + * + * This is also the reason we do not return a {@link java.util.Map} or {@link Set} of {@link SolrCollection}'s here: it would be + * wasteful to fetch all data and fill such a map when plugin code likely needs info about at most one or two collections. + */ + Optional getCollection(String collectionName) throws IOException; Review comment: I will add this. To your use case: if the plugin knows the names of the predefined system collections not to colocate new replicas with, it can request them one by one by name. Where an iteration would be useful is if the plugin doesn't want to place replicas on any `Node` with replicas from any collection named `system-*-metric` for example. It would then have to iterate... I'm always thinking eventually SolrCloud will be scaling to hundreds of thousands collections per cluster so anything that's linear in execution with that number is not acceptable in such a case for placement computation. But I do understand it can be useful in low scale uses. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
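A minimal sketch of the lookup-by-name pattern described in the comment above, assuming the interfaces live in the org.apache.solr.cluster.placement package and that getCollection returns Optional<SolrCollection>; the collection names are made-up examples, and a real plugin would inspect the returned collection rather than just test for its presence.

```java
import java.io.IOException;
import java.util.List;
import java.util.Optional;

import org.apache.solr.cluster.placement.Cluster;
import org.apache.solr.cluster.placement.SolrCollection;

// Checking a handful of known collection names scales independently of the total
// number of collections in the cluster; only wildcard-style rules such as
// "system-*-metric" would force an iteration over all collections.
class SystemCollectionCheck {
  boolean anySystemCollectionExists(Cluster cluster) throws IOException {
    for (String name : List.of(".system", "system-logs-metric")) {
      Optional<SolrCollection> collection = cluster.getCollection(name);
      if (collection.isPresent()) {
        return true;
      }
    }
    return false;
  }
}
```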
[jira] [Commented] (SOLR-14708) Backward-Compatible Replication
[ https://issues.apache.org/jira/browse/SOLR-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172946#comment-17172946 ] Marcus Eagan commented on SOLR-14708: - so far, [~tflobbe] test Case A fails in a few ways: Replication is not working properly. Recovery did not happen appropriately. Admin UI doesn't work either. There is more work to do on the back-compat front. > Backward-Compatible Replication > --- > > Key: SOLR-14708 > URL: https://issues.apache.org/jira/browse/SOLR-14708 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Marcus Eagan >Priority: Critical > > In [SOLR-14702|https://issues.apache.org/jira/browse/SOLR-14702] I proposed > that we remove master/slave terminology from the Solr codebase. Now that's > complete, we need to ensure it is backward compatible to support rolling > upgrades from 8.7.x to 9.x because we really ought not to make it harder to > upgrade Solr. > Tomas offered a helpful path in a now abandoned PR: > {quote}One way to get back compatibility and rolling upgrades could be to > make 9.x code be able to read previous formats, but write new format, and > make 8.x (since 8.7) read new and old, but write old? Anyone wanting to do a > rolling upgrade to 9 would have to be on at least 8.7. Rolling upgrades to > 8.7 would still work. > All the code other than the requests/responses could be changed in 8_x > branch, in addition to master. > {quote} > The approach that we will take is to add a ternary operator in 9_X to accept > parameter values for the legacy verbiage, or leader/follower, but only write > leader/follower. We need to then make 8_x work in the inverse way. The burden > here is not on that proposal or on the code in my view. Instead, the burden > is on the test plan. > If anyone has any guidance please share but here are my thoughts: > Case A: > Test the case where a user is running a standalone cluster in 8 with three > nodes but then updates one of the nodes. > Case B: > Test the case where a user is running a mixed cluster standalone cluster, and > the leader node is forced to fail and then is brought back. > Case C: > A SolrCloud cluster that has a mix of 8 and 9 nodes goes down during a > rolling upgrade and a follower needs to become leader. > I know haven't listed all possible scenarios or everything that could happen. > Please let me know if you have thoughts or guidance on how best to accomplish > this work. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
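For readers of the issue text above, the proposed "read either wording, write only the new wording" approach boils down to something like the following sketch; the parameter names and the SolrParams-based lookup are hypothetical and are not taken from the actual patch.

```java
import org.apache.solr.common.params.SolrParams;

// Illustrative back-compat mapping only: accept either the legacy or the new
// parameter on the way in (the ternary mentioned in the issue), while 9.x code
// would emit only leader/follower terminology on the way out.
class ReplicationRoleCompat {
  static String resolveFollowerFlag(SolrParams params) {
    String newStyle = params.get("follower"); // 9.x wording
    String oldStyle = params.get("slave");    // legacy wording, accepted for rolling upgrades from 8.7+
    return newStyle != null ? newStyle : oldStyle;
  }
}
```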
[jira] [Commented] (SOLR-14708) Backward-Compatible Replication
[ https://issues.apache.org/jira/browse/SOLR-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172947#comment-17172947 ] Marcus Eagan commented on SOLR-14708: - Also, master/slave is not very documented as well. It is straightforward, but annoying to work with, for sure. > Backward-Compatible Replication > --- > > Key: SOLR-14708 > URL: https://issues.apache.org/jira/browse/SOLR-14708 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Marcus Eagan >Priority: Critical > > In [SOLR-14702|https://issues.apache.org/jira/browse/SOLR-14702] I proposed > that we remove master/slave terminology from the Solr codebase. Now that's > complete, we need to ensure it is backward compatible to support rolling > upgrades from 8.7.x to 9.x because we really ought not to make it harder to > upgrade Solr. > Tomas offered a helpful path in a now abandoned PR: > {quote}One way to get back compatibility and rolling upgrades could be to > make 9.x code be able to read previous formats, but write new format, and > make 8.x (since 8.7) read new and old, but write old? Anyone wanting to do a > rolling upgrade to 9 would have to be on at least 8.7. Rolling upgrades to > 8.7 would still work. > All the code other than the requests/responses could be changed in 8_x > branch, in addition to master. > {quote} > The approach that we will take is to add a ternary operator in 9_X to accept > parameter values for the legacy verbiage, or leader/follower, but only write > leader/follower. We need to then make 8_x work in the inverse way. The burden > here is not on that proposal or on the code in my view. Instead, the burden > is on the test plan. > If anyone has any guidance please share but here are my thoughts: > Case A: > Test the case where a user is running a standalone cluster in 8 with three > nodes but then updates one of the nodes. > Case B: > Test the case where a user is running a mixed cluster standalone cluster, and > the leader node is forced to fail and then is brought back. > Case C: > A SolrCloud cluster that has a mix of 8 and 9 nodes goes down during a > rolling upgrade and a follower needs to become leader. > I know haven't listed all possible scenarios or everything that could happen. > Please let me know if you have thoughts or guidance on how best to accomplish > this work. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-14708) Backward-Compatible Replication
[ https://issues.apache.org/jira/browse/SOLR-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172947#comment-17172947 ] Marcus Eagan edited comment on SOLR-14708 at 8/7/20, 7:18 AM: -- Also, master/slave is not very documented as well. It is straightforward, but annoying to work with, for sure because there is little documentation. was (Author: marcussorealheis): Also, master/slave is not very documented as well. It is straightforward, but annoying to work with, for sure. > Backward-Compatible Replication > --- > > Key: SOLR-14708 > URL: https://issues.apache.org/jira/browse/SOLR-14708 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Marcus Eagan >Priority: Critical > > In [SOLR-14702|https://issues.apache.org/jira/browse/SOLR-14702] I proposed > that we remove master/slave terminology from the Solr codebase. Now that's > complete, we need to ensure it is backward compatible to support rolling > upgrades from 8.7.x to 9.x because we really ought not to make it harder to > upgrade Solr. > Tomas offered a helpful path in a now abandoned PR: > {quote}One way to get back compatibility and rolling upgrades could be to > make 9.x code be able to read previous formats, but write new format, and > make 8.x (since 8.7) read new and old, but write old? Anyone wanting to do a > rolling upgrade to 9 would have to be on at least 8.7. Rolling upgrades to > 8.7 would still work. > All the code other than the requests/responses could be changed in 8_x > branch, in addition to master. > {quote} > The approach that we will take is to add a ternary operator in 9_X to accept > parameter values for the legacy verbiage, or leader/follower, but only write > leader/follower. We need to then make 8_x work in the inverse way. The burden > here is not on that proposal or on the code in my view. Instead, the burden > is on the test plan. > If anyone has any guidance please share but here are my thoughts: > Case A: > Test the case where a user is running a standalone cluster in 8 with three > nodes but then updates one of the nodes. > Case B: > Test the case where a user is running a mixed cluster standalone cluster, and > the leader node is forced to fail and then is brought back. > Case C: > A SolrCloud cluster that has a mix of 8 and 9 nodes goes down during a > rolling upgrade and a follower needs to become leader. > I know haven't listed all possible scenarios or everything that could happen. > Please let me know if you have thoughts or guidance on how best to accomplish > this work. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1684: SOLR-14613: strongly typed initial proposal for plugin interface
murblanc commented on a change in pull request #1684: URL: https://github.com/apache/lucene-solr/pull/1684#discussion_r466866058 ## File path: solr/core/src/java/org/apache/solr/cluster/placement/PropertyKeyFactory.java ## @@ -0,0 +1,61 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.solr.cluster.placement; + +/** + * Factory used by the plugin to create property keys to request property values from Solr. + * + * Building of a {@link PropertyKey} requires specifying the target (context) from which the value of that key should be + * obtained. This is done by specifying the appropriate {@link PropertyValueSource}. + * For clarity, when only a single type of target is acceptable, the corresponding subtype of {@link PropertyValueSource} is used instead + * (for example {@link Node}). + */ +public interface PropertyKeyFactory { + /** + * Returns a property key to request the number of cores on a {@link Node}. + */ + PropertyKey createCoreCountKey(Node node); + + /** + * Returns a property key to request disk related info on a {@link Node}. + */ + PropertyKey createDiskInfoKey(Node node); + + /** + * Returns a property key to request the value of a system property on a {@link Node}. + * @param systemPropertyName the name of the system property to retrieve. + */ + PropertyKey createSystemPropertyKey(Node node, String systemPropertyName); + + /** + * Returns a property key to request the value of a metric. + * + * Not all metrics make sense everywhere, but metrics can be applied to different objects. For example + * SEARCHER.searcher.indexCommitSize would make sense for a given replica of a given shard of a given collection, + * and possibly in other contexts. + * + * @param metricSource The registry of the metric. For example a specific {@link Replica}. + * @param metricName for example SEARCHER.searcher.indexCommitSize. + */ + PropertyKey createMetricKey(PropertyValueSource metricSource, String metricName); Review comment: Maybe for metrics there's a need to add a registry name in addition to the `PropertyValueSource`? That would likely solve this issue. My thinking was that we might have metrics on other things than a `Node` or the JVM where a `Node` runs. Like metrics on a collection... If this is not the case, then I guess metrics should have a value source of `Node`, and then a registry name (that might better be expressed as enum if there are only two). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
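As a hedged illustration of how plugin code might use the factory above (not code from the PR): the method names come from the diff, while the package, the system property name, and the choice of a Replica as the metric's PropertyValueSource are assumptions.

```java
import org.apache.solr.cluster.placement.Node;
import org.apache.solr.cluster.placement.PropertyKey;
import org.apache.solr.cluster.placement.PropertyKeyFactory;
import org.apache.solr.cluster.placement.Replica;

// Builds one key of each kind discussed above; the keys would then be handed
// back to Solr to have their values resolved.
class PropertyKeyUsageSketch {
  void buildKeys(PropertyKeyFactory factory, Node node, Replica replica) {
    PropertyKey coreCount = factory.createCoreCountKey(node);
    PropertyKey diskInfo = factory.createDiskInfoKey(node);
    PropertyKey zone = factory.createSystemPropertyKey(node, "availability_zone"); // made-up property name
    PropertyKey commitSize = factory.createMetricKey(replica, "SEARCHER.searcher.indexCommitSize");
  }
}
```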
[GitHub] [lucene-solr] noblepaul commented on a change in pull request #1720: SOLR-14712 Standardize RPC calls in Solr
noblepaul commented on a change in pull request #1720: URL: https://github.com/apache/lucene-solr/pull/1720#discussion_r466859745 ## File path: solr/solrj/src/java/org/apache/solr/common/util/RpcFactory.java ## @@ -0,0 +1,107 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.common.util; + +import org.apache.solr.common.SolrException; +import org.apache.solr.common.params.CommonParams; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.util.function.Function; + +/**A factory that creates any type of RPC calls in Solr + * This is designed to create low level access to the RPC mechanism. + * This is agnostic of Solr documents or other internal concepts of Solr + * But it knows certain things + * a) how to locate a Solr core/replica + * b) basic HTTP access, + * c) serialization/deserialization is the responsibility of the code that is making a request + * + */ +public interface RpcFactory { + +CallRouter createCallRouter(); + +HttpRpc createHttpRpc(); + + +interface ResponseConsumer { +/**Allows this impl to add request params/http headers before the request is fired + */ +default void setRpc(HttpRpc rpc){}; + +/**Process the response. + * Ensure that the whole stream is eaten up before this method returns + * The stream will be closed after the method returns + */ +Object accept(InputStream is) throws IOException; +} + +/**Provide the payload stream + * + */ +interface InputSupplier { +void write(OutputStream os) throws IOException; Review comment: Yes, this call directly writes to the stream to the server. if there is an underlying failure, it results in an `IOException` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
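A rough sketch of the two caller-supplied callbacks in the proposed RpcFactory interface, written against the signatures visible in the diff above; the payload contents, the string decoding, and the assumption that both nested interfaces stay lambda-friendly (single abstract method) are illustrative, not part of the PR.

```java
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import org.apache.solr.common.util.RpcFactory;

class RpcCallbacksSketch {
  // Writes the request payload straight to the server connection; a transport
  // failure surfaces here as an IOException, as noted in the review reply above.
  static final RpcFactory.InputSupplier PAYLOAD =
      os -> os.write("{\"query\":\"*:*\"}".getBytes(StandardCharsets.UTF_8));

  // Must consume the whole stream before returning; the framework closes it afterwards.
  static final RpcFactory.ResponseConsumer RESPONSE =
      (InputStream is) -> new String(is.readAllBytes(), StandardCharsets.UTF_8);
}
```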
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1684: SOLR-14613: strongly typed initial proposal for plugin interface
murblanc commented on a change in pull request #1684: URL: https://github.com/apache/lucene-solr/pull/1684#discussion_r466868182 ## File path: solr/core/src/java/org/apache/solr/cluster/placement/ReplicaPlacement.java ## @@ -0,0 +1,29 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.solr.cluster.placement; + +/** + * Placement decision for a single {@link Replica}. Note this placement decision is used as part of a {@link WorkOrder}, + * it does not directly lead to the plugin code getting a corresponding {@link Replica} instance, nor does it require the + * plugin to provide a {@link Shard} instance (the plugin code gets such instances for existing replicas and shards in the + * cluster but does not create them directly for adding new replicas for new or existing shards). + * + * Captures the {@link Shard} (via the shard name), {@link Node} and {@link Replica.ReplicaType} of a Replica to be created. + */ +public interface ReplicaPlacement { Review comment: It is already tracked. As the plugin creates and returns a `WorkOrder` to tell Solr where to do what, it passes a reference to the `Request`. Solr code would be totally able to log the request and the corresponding placement decisions. `ReplicaPlacement` instances do not exist outside a `WorkOrder` in Solr (they do in plugin code but if not added to a `WorkOrder` it's as if they were not created). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14708) Backward-Compatible Replication
[ https://issues.apache.org/jira/browse/SOLR-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172952#comment-17172952 ] Marcus Eagan commented on SOLR-14708: - There is also a very strong chance that I am not doing this right because it looks like some things have changed since I last ran a cluster in standalone mode. > Backward-Compatible Replication > --- > > Key: SOLR-14708 > URL: https://issues.apache.org/jira/browse/SOLR-14708 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Marcus Eagan >Priority: Critical > > In [SOLR-14702|https://issues.apache.org/jira/browse/SOLR-14702] I proposed > that we remove master/slave terminology from the Solr codebase. Now that's > complete, we need to ensure it is backward compatible to support rolling > upgrades from 8.7.x to 9.x because we really ought not to make it harder to > upgrade Solr. > Tomas offered a helpful path in a now abandoned PR: > {quote}One way to get back compatibility and rolling upgrades could be to > make 9.x code be able to read previous formats, but write new format, and > make 8.x (since 8.7) read new and old, but write old? Anyone wanting to do a > rolling upgrade to 9 would have to be on at least 8.7. Rolling upgrades to > 8.7 would still work. > All the code other than the requests/responses could be changed in 8_x > branch, in addition to master. > {quote} > The approach that we will take is to add a ternary operator in 9_X to accept > parameter values for the legacy verbiage, or leader/follower, but only write > leader/follower. We need to then make 8_x work in the inverse way. The burden > here is not on that proposal or on the code in my view. Instead, the burden > is on the test plan. > If anyone has any guidance please share but here are my thoughts: > Case A: > Test the case where a user is running a standalone cluster in 8 with three > nodes but then updates one of the nodes. > Case B: > Test the case where a user is running a mixed cluster standalone cluster, and > the leader node is forced to fail and then is brought back. > Case C: > A SolrCloud cluster that has a mix of 8 and 9 nodes goes down during a > rolling upgrade and a follower needs to become leader. > I know haven't listed all possible scenarios or everything that could happen. > Please let me know if you have thoughts or guidance on how best to accomplish > this work. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] anshumg commented on a change in pull request #1720: SOLR-14712 Standardize RPC calls in Solr
anshumg commented on a change in pull request #1720: URL: https://github.com/apache/lucene-solr/pull/1720#discussion_r466892859 ## File path: solr/solrj/src/java/org/apache/solr/common/util/CallRouter.java ## @@ -0,0 +1,52 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.common.util; + +public interface CallRouter { +/** + * send to a specific node. usually admin requests + */ +CallRouter toNode(String nodeName); + +/** + * Make a request to any replica of the shard of type + */ +CallRouter toShard(String collection, String shard, ReplicaType type); + +/** + * Identify the shard using the route key and send the request to a given replica type + */ +CallRouter toShard(String collection, ReplicaType type, String routeKey); Review comment: Ah yes. I misread the first signature as (String, String, String). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] anshumg commented on a change in pull request #1720: SOLR-14712 Standardize RPC calls in Solr
anshumg commented on a change in pull request #1720: URL: https://github.com/apache/lucene-solr/pull/1720#discussion_r466893636 ## File path: solr/solrj/src/java/org/apache/solr/common/util/RpcFactory.java ## @@ -0,0 +1,107 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.common.util; + +import org.apache.solr.common.SolrException; +import org.apache.solr.common.params.CommonParams; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.util.function.Function; + +/**A factory that creates any type of RPC calls in Solr + * This is designed to create low level access to the RPC mechanism. + * This is agnostic of Solr documents or other internal concepts of Solr + * But it knows certain things + * a) how to locate a Solr core/replica + * b) basic HTTP access, + * c) serialization/deserialization is the responsibility of the code that is making a request + * + */ +public interface RpcFactory { + +CallRouter createCallRouter(); + +HttpRpc createHttpRpc(); + + +interface ResponseConsumer { +/**Allows this impl to add request params/http headers before the request is fired + */ +default void setRpc(HttpRpc rpc){}; + +/**Process the response. + * Ensure that the whole stream is eaten up before this method returns + * The stream will be closed after the method returns + */ +Object accept(InputStream is) throws IOException; +} + +/**Provide the payload stream + * + */ +interface InputSupplier { +void write(OutputStream os) throws IOException; Review comment: Makes sense. The IDE didn’t get that part :) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14684) CloudExitableDirectoryReaderTest failing about 25% of the time
[ https://issues.apache.org/jira/browse/SOLR-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172990#comment-17172990 ] Cao Manh Dat commented on SOLR-14684: - Ok, I figured out the problem for this issue. In the past we never checked whether a request had already timed out before sending requests to other shards, so in the test we could set timeAllowed to a minimal value like 1 (1ms). SOLR-14354 actually does the check before sending query requests to other shards, which leads to the failure above. My gut feeling is that the change made by SOLR-14354 is correct, but I don't know a quick way to solve the test failure properly, so I will revert to the previous behaviour and file another issue for this, which needs more thought and checks. > CloudExitableDirectoryReaderTest failing about 25% of the time > -- > > Key: SOLR-14684 > URL: https://issues.apache.org/jira/browse/SOLR-14684 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Affects Versions: master (9.0) >Reporter: Erick Erickson >Priority: Major > Attachments: stdout > > > If I beast this on my local machine, it fails (non reproducibly of course) > about 1/4 of the time. Log attached. The test itself hasn't changed in 11 > months or so. > It looks like occasionally the calls throw an error rather than return > partial results with a message: "Time allowed to handle this request > exceeded:[]". > It's been failing very intermittently for a couple of years, but the failure > rate really picked up in the last couple of weeks. IDK whether the failures > prior to the last couple of weeks are the same root cause. > I'll do some spelunking to see if I can pinpoint the commit that made this > happen, but it'll take a while. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
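To make the behaviour being discussed concrete, here is an illustrative sketch (not the actual SOLR-14354 change, and all names are invented) of the pre-submit check: once timeAllowed has already been spent, the coordinator should not fan the query out to further shards.

```java
import java.util.concurrent.TimeUnit;

class TimeAllowedGate {
  // Returns false once the time budget is exhausted; the caller would then mark
  // partial results (or raise an error, the inconsistency tracked in SOLR-14719)
  // instead of sending a shard request that cannot complete in time.
  boolean shouldSubmit(long timeAllowedMs, long requestStartNanos) {
    if (timeAllowedMs <= 0) {
      return true; // no limit configured
    }
    long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - requestStartNanos);
    return elapsedMs < timeAllowedMs;
  }
}
```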
[jira] [Created] (SOLR-14719) Handling exceeding timeAllowed consistently
Cao Manh Dat created SOLR-14719: --- Summary: Handling exceeding timeAllowed consistently Key: SOLR-14719 URL: https://issues.apache.org/jira/browse/SOLR-14719 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Cao Manh Dat Continuing from SOLR-14684, where HttpShardHandler should skip routing requests to other shards if timeAllowed is already exceeded. I feel we do not handle exceeding timeAllowed consistently between the different places in the code (the node that aggregates the query result vs the nodes that execute the query), which leads to different errors/responses. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14684) CloudExitableDirectoryReaderTest failing about 25% of the time
[ https://issues.apache.org/jira/browse/SOLR-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172992#comment-17172992 ] Cao Manh Dat commented on SOLR-14684: - Opened SOLR-14719 for that. > CloudExitableDirectoryReaderTest failing about 25% of the time > -- > > Key: SOLR-14684 > URL: https://issues.apache.org/jira/browse/SOLR-14684 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Affects Versions: master (9.0) >Reporter: Erick Erickson >Priority: Major > Attachments: stdout > > > If I beast this on my local machine, it fails (non reproducibly of course) > about 1/4 of the time. Log attached. The test itself hasn't changed in 11 > months or so. > It looks like occasionally the calls throw an error rather than return > partial results with a message: "Time allowed to handle this request > exceeded:[]". > It's been failing very intermittently for a couple of years, but the failure > rate really picked up in the last couple of weeks. IDK whether the failures > prior to the last couple of weeks are the same root cause. > I'll do some spelunking to see if I can pinpoint the commit that made this > happen, but it'll take a while. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14630) CloudSolrClient doesn't pick correct core when server contains more shards
[ https://issues.apache.org/jira/browse/SOLR-14630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172998#comment-17172998 ] Ivan Djurasevic commented on SOLR-14630: [~cpoerschke] I'm sorry for the late response. *First, I want to say that search and update are working; there are no problems with those operations.* Update and search work because the request is forwarded from the wrong shard to the correct shard. The main problems are the overhead of forwarding requests and losing update request parameters during forwarding. When we perform a batch update we send some parameters (for example a language code) in the update request parameters; later, in the update request processor chain, we use those parameters (the language code) to create field names (SOURCE_EN_STORE, SOURCE_EN_NGRAM, SOURCE_EN_FUZZY, ...). To be more precise, the client (application) creates documents with a field (SOURCE=text) and sets the language to English (en) in the update request. In the update request processor chain, using the SOURCE field and the locale code from the update request parameters, we create the new fields (SOURCE_EN_STORE, SOURCE_EN_NGRAM, SOURCE_EN_FUZZY, ...). If you need more information, please let me know. > CloudSolrClient doesn't pick correct core when server contains more shards > -- > > Key: SOLR-14630 > URL: https://issues.apache.org/jira/browse/SOLR-14630 > Project: Solr > Issue Type: Bug > Components: SolrCloud, SolrJ >Affects Versions: 8.5.1, 8.5.2 >Reporter: Ivan Djurasevic >Priority: Major > > Precondition: create collection with 4 shards on one server. > During search and update, solr cloud client picks wrong core even _route_ > exists in query param. In BaseSolrClient class, method sendRequest, > > {code:java} > sortedReplicas.forEach( replica -> { > if (seenNodes.add(replica.getNodeName())) { > theUrlList.add(ZkCoreNodeProps.getCoreUrl(replica.getBaseUrl(), > joinedInputCollections)); > } > }); > {code} > > Previous part of code adds base url(localhost:8983/solr/collection_name) to > theUrlList, it doesn't create core address(localhost:8983/solr/core_name). If > we change previous code to: > {quote} > {code:java} > sortedReplicas.forEach(replica -> { > if (seenNodes.add(replica.getNodeName())) { > theUrlList.add(replica.getCoreUrl()); > } > });{code} > {quote} > Solr cloud client picks core which is defined with _route_ parameter. > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13381) Unexpected docvalues type SORTED_NUMERIC Exception when grouping by a PointField facet
[ https://issues.apache.org/jira/browse/SOLR-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173003#comment-17173003 ] Tobias Ibounig commented on SOLR-13381: --- This happened to us on an upgrade from Solr 7.7.2 to Solr 8.4.1. We changed all types from trie to point fields. A facet of type IntPointField with grouping then threw an exception saying that it expected SORTED_SET and not NUMERIC or SORTED_NUMERIC (tried with single-valued and multivalued fields) {noformat} Caused by: java.lang.IllegalStateException: unexpected docvalues type SORTED_NUMERIC for field 'xxx' (expected one of [SORTED, SORTED_SET]). Re-index with correct docvalues type.{noformat} Switching back to a TrieField fixed the issue. It feels like an overlooked edge case that should be fixed before the removal of TrieFields (in 9.0?), otherwise there will be no workaround. > Unexpected docvalues type SORTED_NUMERIC Exception when grouping by a > PointField facet > -- > > Key: SOLR-13381 > URL: https://issues.apache.org/jira/browse/SOLR-13381 > Project: Solr > Issue Type: Bug > Components: faceting >Affects Versions: 7.0, 7.6, 7.7, 7.7.1 > Environment: solr, solrcloud >Reporter: Zhu JiaJun >Priority: Major > Attachments: SOLR-13381.patch > > > Hey, > I got an "Unexpected docvalues type SORTED_NUMERIC" exception when I perform > group facet on an IntPointField. Debugging into the source code, the cause is > that internally the docvalue type for PointField is "NUMERIC" (single value) > or "SORTED_NUMERIC" (multi value), while the TermGroupFacetCollector class > requires the facet field must have a "SORTED" or "SOTRTED_SET" docvalue type: > [https://github.com/apache/lucene-solr/blob/2480b74887eff01f729d62a57b415d772f947c91/lucene/grouping/src/java/org/apache/lucene/search/grouping/TermGroupFacetCollector.java#L313] > > When I change schema for all int field to TrieIntField, the group facet then > work. Since internally the docvalue type for TrieField is SORTED (single > value) or SORTED_SET (multi value). > Regarding that the "TrieField" is depreciated in Solr7, please help on this > grouping facet issue for PointField. I also commented this issue in SOLR-7495. > > In addtional, all place of "${solr.tests.IntegerFieldType}" in the unit test > files seems to be using the "TrieintField", if change to "IntPointField", > some unit tests will fail, for example: > [https://github.com/apache/lucene-solr/blob/3de0b3671998cc9bc723d10f1b31ce48cbd4fa64/solr/core/src/test/org/apache/solr/request/SimpleFacetsTest.java#L417] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] atris commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters
atris commented on a change in pull request #1686: URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r466918853 ## File path: solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java ## @@ -53,6 +53,8 @@ public boolean handleRequest() throws InterruptedException { * Whether to allow another request type to borrow a slot from this request rate limiter. Typically works fine * if there is a relatively lesser load on this request rate limiter's type compared to the others (think of skew). * @return true if allow, false otherwise + * + * @lucene.experimental -- Can cause slots to be blocked if a request borrows a slot and is itself long lived. */ public boolean allowSlotBorrowing() { synchronized (this) { Review comment: Good point. I was not sure about using availablePermits for this but decided to use it. IMO the worst that can happen is that a request gets an indication from availablePermits that a slot is available but then fails to acquire it because another concurrent request has acquired it. The approach you suggest is more comprehensive, so I changed to that, thanks. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
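The race being described maps onto plain java.util.concurrent.Semaphore semantics; the sketch below is a generic illustration rather than the RequestRateLimiter code itself.

```java
import java.util.concurrent.Semaphore;

class SlotBorrowSketch {
  boolean borrow(Semaphore slots) {
    // Racy check-then-act: another request can take the last permit between the
    // availablePermits() check and the acquire, so the "indication" can go stale.
    // if (slots.availablePermits() > 0) { slots.acquireUninterruptibly(); return true; }

    // Atomic alternative: either the permit was obtained or it was not.
    return slots.tryAcquire();
  }
}
```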
[jira] [Commented] (SOLR-14708) Backward-Compatible Replication
[ https://issues.apache.org/jira/browse/SOLR-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173011#comment-17173011 ] Marcus Eagan commented on SOLR-14708: - Honestly [~tflobbe] there seem to be a lot of dependencies and other issues that challenge it. I will have to review it more closely with dedicated time next week. Backward compatibility touches various parts of the application and will require a coordinated effort, probably from you, AB, and many others. > Backward-Compatible Replication > --- > > Key: SOLR-14708 > URL: https://issues.apache.org/jira/browse/SOLR-14708 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Marcus Eagan >Priority: Critical > > In [SOLR-14702|https://issues.apache.org/jira/browse/SOLR-14702] I proposed > that we remove master/slave terminology from the Solr codebase. Now that's > complete, we need to ensure it is backward compatible to support rolling > upgrades from 8.7.x to 9.x because we really ought not to make it harder to > upgrade Solr. > Tomas offered a helpful path in a now abandoned PR: > {quote}One way to get back compatibility and rolling upgrades could be to > make 9.x code be able to read previous formats, but write new format, and > make 8.x (since 8.7) read new and old, but write old? Anyone wanting to do a > rolling upgrade to 9 would have to be on at least 8.7. Rolling upgrades to > 8.7 would still work. > All the code other than the requests/responses could be changed in 8_x > branch, in addition to master. > {quote} > The approach that we will take is to add a ternary operator in 9_X to accept > parameter values for the legacy verbiage, or leader/follower, but only write > leader/follower. We need to then make 8_x work in the inverse way. The burden > here is not on that proposal or on the code in my view. Instead, the burden > is on the test plan. > If anyone has any guidance please share but here are my thoughts: > Case A: > Test the case where a user is running a standalone cluster in 8 with three > nodes but then updates one of the nodes. > Case B: > Test the case where a user is running a mixed cluster standalone cluster, and > the leader node is forced to fail and then is brought back. > Case C: > A SolrCloud cluster that has a mix of 8 and 9 nodes goes down during a > rolling upgrade and a follower needs to become leader. > I know haven't listed all possible scenarios or everything that could happen. > Please let me know if you have thoughts or guidance on how best to accomplish > this work. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (LUCENE-9448) Make an eqivalent to Ant's "run" target for Luke module
Tomoko Uchida created LUCENE-9448: - Summary: Make an eqivalent to Ant's "run" target for Luke module Key: LUCENE-9448 URL: https://issues.apache.org/jira/browse/LUCENE-9448 Project: Lucene - Core Issue Type: Sub-task Reporter: Tomoko Uchida With Ant build, Luke Swing app can be launched by "ant run" after checking out the source code. "ant run" allows developers to immediately see the effects of UI changes without creating the whole zip/tgz package (originally, it was suggested when integrating Luke to Lucene). In Gradle, {{:lucene:luke:run}} task would be easily implemented with {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9448) Make an eqivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173022#comment-17173022 ] Tomoko Uchida commented on LUCENE-9448: --- I forgot about this, [~erickerickson] reminded me (thanks Erick). It is not an ordinary build task but a helper for developers - [~dweiss] does it make sense we also have this in Gradle ? > Make an eqivalent to Ant's "run" target for Luke module > --- > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9448) Make an eqivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173034#comment-17173034 ] Dawid Weiss commented on LUCENE-9448: - Hmm... This would keep the gradle daemon running while the subprocess is launched. My opinion is that it makes little sense, to be honest. Rather, the assembly should be straightened up so that build/luke contains the scripts and dependencies required to run Luke. Then you could do: gradlew :lucene:luke:assemble and it'd compile everything needed, then maybe print something like: Luke is ready to launch, run it with: java -jar lucene/luke/build/luke/luke.jar A proper classpath manifest and main class would also be helpful (then you don't have to run any scripts, just the jar itself). > Make an eqivalent to Ant's "run" target for Luke module > --- > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (SOLR-14720) Validate Sanctity of Request Type
Atri Sharma created SOLR-14720: -- Summary: Validate Sanctity of Request Type Key: SOLR-14720 URL: https://issues.apache.org/jira/browse/SOLR-14720 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Atri Sharma https://issues.apache.org/jira/browse/SOLR-13528 introduces a mechanism to distinguish between internal (server) and external (client) requests. Currently, this mechanism works by populating a relevant field in the request's headers. However, a rogue client can impersonate or fabricate a server request. This Jira tracks the effort to validate that a client request's context is set correctly. We look to tap into the authentication loop to piggyback on the information provided there. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
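A purely hypothetical sketch of the idea behind SOLR-14720 (cross-check the declared request context against what the authentication layer already established); the header name and principal name below are invented for illustration and do not reflect an actual implementation.

```java
import javax.servlet.http.HttpServletRequest;
import java.security.Principal;

// Hypothetical sketch: do not trust the "internal request" marker on its own,
// cross-check it against what authentication already established.
// The header name and principal name are made up for illustration.
public class RequestContextValidatorSketch {
  private static final String CONTEXT_HEADER = "Solr-Request-Context"; // hypothetical header name
  private static final String INTERNAL_PRINCIPAL = "$solr-internal";   // hypothetical inter-node principal

  static boolean isContextTrustworthy(HttpServletRequest request) {
    String declaredContext = request.getHeader(CONTEXT_HEADER);
    if (!"internal".equalsIgnoreCase(declaredContext)) {
      return true; // no privileged claim made, nothing to verify
    }
    Principal principal = request.getUserPrincipal();
    // Accept the "internal" claim only when the caller authenticated as another node.
    return principal != null && INTERNAL_PRINCIPAL.equals(principal.getName());
  }
}
```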
[GitHub] [lucene-solr] atris commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters
atris commented on a change in pull request #1686: URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r466947788 ## File path: solr/core/src/java/org/apache/solr/servlet/RateLimitManager.java ## @@ -0,0 +1,167 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.solr.servlet; + +import javax.servlet.FilterConfig; +import javax.servlet.http.HttpServletRequest; +import java.lang.invoke.MethodHandles; +import java.util.HashMap; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +import org.apache.solr.client.solrj.SolrRequest; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import static org.apache.solr.common.params.CommonParams.SOLR_REQUEST_CONTEXT_PARAM; +import static org.apache.solr.common.params.CommonParams.SOLR_REQUEST_TYPE_PARAM; + +/** + * This class is responsible for managing rate limiting per request type. Rate limiters + * can be registered with this class against a corresponding type. There can be only one + * rate limiter associated with a request type. + * + * The actual rate limiting and the limits should be implemented in the corresponding RequestRateLimiter + * implementation. RateLimitManager is responsible for the orchestration but not the specifics of how the + * rate limiting is being done for a specific request type. + */ +public class RateLimitManager { + private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass()); + + public final static int DEFAULT_CONCURRENT_REQUESTS= (Runtime.getRuntime().availableProcessors()) * 3; + public final static long DEFAULT_SLOT_ACQUISITION_TIMEOUT_MS = -1; + private final Map requestRateLimiterMap; + + // IMPORTANT: The slot from the corresponding rate limiter should be acquired before adding the request + // to this map. Subsequently, the request should be deleted from the map before the slot is released. + private final Map activeRequestsMap; + + public RateLimitManager() { +this.requestRateLimiterMap = new HashMap<>(); +this.activeRequestsMap = new ConcurrentHashMap<>(); + } + + // Handles an incoming request. The main orchestration code path, this method will + // identify which (if any) rate limiter can handle this request. Internal requests will not be + // rate limited + // Returns true if request is accepted for processing, false if it should be rejected + public boolean handleRequest(HttpServletRequest request) throws InterruptedException { +String requestContext = request.getHeader(SOLR_REQUEST_CONTEXT_PARAM); Review comment: https://issues.apache.org/jira/browse/SOLR-14720 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
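To make the orchestration described in the RateLimitManager javadoc concrete, here is a hedged sketch of how a dispatch filter could consult such a manager and answer with HTTP 429 when no slot is available; the RateLimitManagerLike interface, its method names and the dispatch method are stand-ins, not the actual SolrDispatchFilter wiring.

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

// Sketch of the call shape only; the real wiring lives in SolrDispatchFilter and
// RateLimitManager. The nested interface is a stand-in so the example is self-contained.
public class RateLimitingFilterSketch {

  /** Hypothetical stand-in for the manager shown in the diff above. */
  public interface RateLimitManagerLike {
    boolean handleRequest(HttpServletRequest request) throws InterruptedException;
    void releaseSlot(HttpServletRequest request);
  }

  private final RateLimitManagerLike rateLimitManager;

  public RateLimitingFilterSketch(RateLimitManagerLike rateLimitManager) {
    this.rateLimitManager = rateLimitManager;
  }

  public void dispatch(HttpServletRequest req, HttpServletResponse resp, Runnable downstream)
      throws IOException, InterruptedException {
    // Ask the manager for a slot; false means the request type's limit is exhausted.
    if (!rateLimitManager.handleRequest(req)) {
      resp.sendError(429, "Too Many Requests"); // the status asserted in the PR's tests
      return;
    }
    try {
      downstream.run(); // normal request handling
    } finally {
      rateLimitManager.releaseSlot(req); // always release the slot
    }
  }
}
```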
[jira] [Commented] (LUCENE-9448) Make an eqivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173042#comment-17173042 ] Uwe Schindler commented on LUCENE-9448: --- I still think that a "quick tester" not assembling everything into JAR files and just using the runtime classpath would be a good idea. That's my personal opinion. luke.jar is not a "fat jar", it still needs all dependencies on classpath. > Make an eqivalent to Ant's "run" target for Luke module > --- > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9448) Make an eqivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173058#comment-17173058 ] Dawid Weiss commented on LUCENE-9448: - It doesn't need to be a fat jar if it had proper classpath attribute in the manifest (relative references to lib/*.jar). Then you can just launch it and classpath will be set automatically - no need for scripts to do that. > Make an eqivalent to Ant's "run" target for Luke module > --- > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9448) Make an eqivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173065#comment-17173065 ] Uwe Schindler commented on LUCENE-9448: --- The problem is that the lib folder structure looks different in releases and in the local build system. I tried to set this up back at the time when Tomoko and I did the initial import. But this is unrelated to simply allowing the GUI to be started from gradle's command line. That's a simple task which depends on compile (or similar). Makes life easier and faster for quick tests. > Make an eqivalent to Ant's "run" target for Luke module > --- > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9448) Make an eqivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173068#comment-17173068 ] Dawid Weiss commented on LUCENE-9448: - The local assemble task should sync everything needed from the runtime configuration - then you'd be running the "actual" distribution package. I don't mind having a 'run' task which wouldn't assemble anything, of course, but I don't think it makes practical sense as it'll hang the build in the IDE, for example. But that's up to you, really. > Make an eqivalent to Ant's "run" target for Luke module > --- > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9448) Make an eqivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173070#comment-17173070 ] Tomoko Uchida commented on LUCENE-9448: --- While I was trying to write up my comment, the discussion moved on... I will just throw in my understanding here, although there may be no new information. Luke is not a self-contained jar and the relative locations of the dependent jars are different at build time and packaging time, so we can't specify a build-time Class-Path in the JAR Manifest. (Am I correct?) I'm not insisting on a Gradle task at all, but I think an alternative that is as easy to run as "ant run" would be needed. Let me keep this open until we find a good solution; anyway it is not urgent. > Make an eqivalent to Ant's "run" target for Luke module > --- > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9448) Make an eqivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173072#comment-17173072 ] Uwe Schindler commented on LUCENE-9448: --- bq. I don't mind having a 'run' task which wouldn't assemble anything, of course, but I don't think it makes practical sense as it'll hang the build in the IDE, for example. But that's up to you, really. Why would that hang the IDE? If you don't execute the task and it's not part of the standard "build all", who cares? We have many "run-like" tasks which the IDE would never run, like "regenerate". > Make an eqivalent to Ant's "run" target for Luke module > --- > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9448) Make an eqivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173074#comment-17173074 ] Uwe Schindler commented on LUCENE-9448: --- One example: I would also like to have the "ant smoke-test", "ant jenkins-nightly", which is triggered by jenkins for easier CI configuration. That's not different to the current. Some "helper tasks" not used by IDEs, just for special use cases are perfectly fine, IMHO. You can argue to add scripts for this, but as we run builds on windows and linux, I tend to not use shell scripts for this. > Make an eqivalent to Ant's "run" target for Luke module > --- > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9448) Make an eqivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173081#comment-17173081 ] Dawid Weiss commented on LUCENE-9448: - > Why would that hang the IDE? I'm guessing IntelliJ has a single worker gradle daemon - just guessing, though. bq. Luke is not a self-contained jar and the relative locations of the dependent jars are different at build time and packaging time, so we can't specify a build-time Class-Path in the JAR Manifest. (Am I correct?) They are different but they point at the same set of artifacts. It's really simple to sync libs and set a proper relative manifest based on that. Look at this snippet, for example: https://github.com/carrot2/carrot2/blob/master/dcs/distribution/build.gradle#L38-L54 > Make an eqivalent to Ant's "run" target for Luke module > --- > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (SOLR-14721) RequestHandlerBase org.apache.solr.common.SolrException: No such core
anil created SOLR-14721: --- Summary: RequestHandlerBase org.apache.solr.common.SolrException: No such core Key: SOLR-14721 URL: https://issues.apache.org/jira/browse/SOLR-14721 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: clients - C# Affects Versions: 8.1 Environment: Testing Reporter: anil Hi Team, We are getting below exception where Core is getting unloaded during index rebuild in Solr 8.1 version RequestHandlerBase org.apache.solr.common.SolrException: No such core. Exception: Test:_index] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: No such core: Test_index at org.apache.solr.core.SolrCores.swap(SolrCores.java:268) at org.apache.solr.core.CoreContainer.swap(CoreContainer.java:1578) at org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$3(CoreAdminOperation.java:138) at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360) at org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396) at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:796) at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:762) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:522) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:397) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.eclipse.jetty.server.Server.handle(Server.java:502) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260) at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:411) at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:305) at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:159) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(E
[jira] [Updated] (SOLR-14721) RequestHandlerBase org.apache.solr.common.SolrException: No such core
[ https://issues.apache.org/jira/browse/SOLR-14721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] anil updated SOLR-14721: Security: Public (was: Private (Security Issue)) > RequestHandlerBase org.apache.solr.common.SolrException: No such core > -- > > Key: SOLR-14721 > URL: https://issues.apache.org/jira/browse/SOLR-14721 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - C# >Affects Versions: 8.1 > Environment: > Testing >Reporter: anil >Priority: Major > Labels: performance > > Hi Team, > We are getting below exception where Core is getting unloaded during index > rebuild in Solr 8.1 version > > RequestHandlerBase org.apache.solr.common.SolrException: No such core. > > Exception: > Test:_index] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: > No such core: Test_index > at org.apache.solr.core.SolrCores.swap(SolrCores.java:268) > at > org.apache.solr.core.CoreContainer.swap(CoreContainer.java:1578) > at > org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$3(CoreAdminOperation.java:138) > at > org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360) > at > org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396) > at > org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at > org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:796) > at > org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:762) > at > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:522) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:397) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) > at > 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) > at org.eclipse.jetty.server.Server.handle(Server.java:502) > at > org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305) > at > org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) > at > org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:411) >
[GitHub] [lucene-solr] CaoManhDat opened a new pull request #1724: SOLR-14684: CloudExitableDirectoryReaderTest failing about 25% of the time
CaoManhDat opened a new pull request #1724: URL: https://github.com/apache/lucene-solr/pull/1724 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14684) CloudExitableDirectoryReaderTest failing about 25% of the time
[ https://issues.apache.org/jira/browse/SOLR-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173089#comment-17173089 ] Cao Manh Dat commented on SOLR-14684: - I created a PR for this one; can you try to reproduce the problem [~erickerickson]? Thanks a lot! > CloudExitableDirectoryReaderTest failing about 25% of the time > -- > > Key: SOLR-14684 > URL: https://issues.apache.org/jira/browse/SOLR-14684 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Affects Versions: master (9.0) >Reporter: Erick Erickson >Priority: Major > Attachments: stdout > > Time Spent: 10m > Remaining Estimate: 0h > > If I beast this on my local machine, it fails (non reproducibly of course) > about 1/4 of the time. Log attached. The test itself hasn't changed in 11 > months or so. > It looks like occasionally the calls throw an error rather than return > partial results with a message: "Time allowed to handle this request > exceeded:[]". > It's been failing very intermittently for a couple of years, but the failure > rate really picked up in the last couple of weeks. IDK whether the failures > prior to the last couple of weeks are the same root cause. > I'll do some spelunking to see if I can pinpoint the commit that made this > happen, but it'll take a while. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14684) CloudExitableDirectoryReaderTest failing about 25% of the time
[ https://issues.apache.org/jira/browse/SOLR-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173088#comment-17173088 ] ASF subversion and git services commented on SOLR-14684: Commit 1b37c981a0ced4876455c9e5effa488d71b70160 in lucene-solr's branch refs/heads/jira/SOLR-14684 from Cao Manh Dat [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1b37c98 ] SOLR-14684: CloudExitableDirectoryReaderTest failing about 25% of the time > CloudExitableDirectoryReaderTest failing about 25% of the time > -- > > Key: SOLR-14684 > URL: https://issues.apache.org/jira/browse/SOLR-14684 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Affects Versions: master (9.0) >Reporter: Erick Erickson >Priority: Major > Attachments: stdout > > > If I beast this on my local machine, it fails (non reproducibly of course) > about 1/4 of the time. Log attached. The test itself hasn't changed in 11 > months or so. > It looks like occasionally the calls throw an error rather than return > partial results with a message: "Time allowed to handle this request > exceeded:[]". > It's been failing very intermittently for a couple of years, but the failure > rate really picked up in the last couple of weeks. IDK whether the failures > prior to the last couple of weeks are the same root cause. > I'll do some spelunking to see if I can pinpoint the commit that made this > happen, but it'll take a while. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] madrob commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters
madrob commented on a change in pull request #1686: URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r466985880 ## File path: solr/core/src/test/org/apache/solr/servlet/TestRequestRateLimiter.java ## @@ -102,31 +103,101 @@ public Boolean call() throws Exception { try { future.get(); } catch (Exception e) { - assertTrue("Not true " + e.getMessage(), e.getMessage().contains("non ok status: 429, message:Too Many Requests")); + assertThat(e.getMessage(), containsString("non ok status: 429, message:Too Many Requests")); } } MockRequestRateLimiter mockQueryRateLimiter = (MockRequestRateLimiter) rateLimitManager.getRequestRateLimiter(SolrRequest.SolrRequestType.QUERY); - assertTrue("Incoming request count did not match. Expected == 25 incoming " + mockQueryRateLimiter.incomingRequestCount.get(), - mockQueryRateLimiter.incomingRequestCount.get() == 25); + assertEquals(mockQueryRateLimiter.incomingRequestCount.get(),25); assertTrue("Incoming accepted new request count did not match. Expected 5 incoming " + mockQueryRateLimiter.acceptedNewRequestCount.get(), mockQueryRateLimiter.acceptedNewRequestCount.get() < 25); assertTrue("Incoming rejected new request count did not match. Expected 20 incoming " + mockQueryRateLimiter.rejectedRequestCount.get(), mockQueryRateLimiter.rejectedRequestCount.get() > 0); - assertTrue("Incoming total processed requests count did not match. Expected " + mockQueryRateLimiter.incomingRequestCount.get() + " incoming " - + (mockQueryRateLimiter.acceptedNewRequestCount.get() + mockQueryRateLimiter.rejectedRequestCount.get()), - (mockQueryRateLimiter.acceptedNewRequestCount.get() + mockQueryRateLimiter.rejectedRequestCount.get()) == mockQueryRateLimiter.incomingRequestCount.get()); + assertEquals(mockQueryRateLimiter.acceptedNewRequestCount.get() + mockQueryRateLimiter.rejectedRequestCount.get(), + mockQueryRateLimiter.incomingRequestCount.get()); +} finally { + executor.shutdown(); +} + } + + @Test + public void testSlotBorrowing() throws Exception { +CloudSolrClient client = cluster.getSolrClient(); +client.setDefaultCollection(SECOND_COLLECTION); + +CollectionAdminRequest.createCollection(SECOND_COLLECTION, 1, 1).process(client); +cluster.waitForActiveCollection(SECOND_COLLECTION, 1, 1); + + +SolrDispatchFilter solrDispatchFilter = cluster.getJettySolrRunner(0).getSolrDispatchFilter(); + +RequestRateLimiter.RateLimiterConfig queryRateLimiterConfig = new RequestRateLimiter.RateLimiterConfig(SolrRequest.SolrRequestType.QUERY, +true, 1, DEFAULT_SLOT_ACQUISITION_TIMEOUT_MS, 5 /* allowedRequests */, true /* isSlotBorrowing */); +RequestRateLimiter.RateLimiterConfig indexRateLimiterConfig = new RequestRateLimiter.RateLimiterConfig(SolrRequest.SolrRequestType.UPDATE, +true, 1, DEFAULT_SLOT_ACQUISITION_TIMEOUT_MS, 5 /* allowedRequests */, true /* isSlotBorrowing */); +// We are fine with a null FilterConfig here since we ensure that MockBuilder never invokes its parent +RateLimitManager.Builder builder = new MockBuilder(null /*dummy FilterConfig */, new MockRequestRateLimiter(queryRateLimiterConfig, 5), new MockRequestRateLimiter(indexRateLimiterConfig, 5)); +RateLimitManager rateLimitManager = builder.build(); + +solrDispatchFilter.replaceRateLimitManager(rateLimitManager); + +for (int i = 0; i < 100; i++) { + SolrInputDocument doc = new SolrInputDocument(); + + doc.setField("id", i); + doc.setField("text", "foo"); + client.add(doc); +} + +client.commit(); + +ExecutorService executor = ExecutorUtil.newMDCAwareCachedThreadPool("threadpool"); +List> callableList = 
new ArrayList<>(); +List> futures; + +try { + for (int i = 0; i < 25; i++) { +callableList.add(() -> { + try { +QueryResponse response = client.query(new SolrQuery("*:*")); + +if (response.getResults().getNumFound() > 0) { + assertEquals(100, response.getResults().getNumFound()); +} + } catch (Exception e) { +throw new RuntimeException(e.getMessage()); + } + + return true; +}); + } + + futures = executor.invokeAll(callableList); + + for (Future future : futures) { +try { + future.get(); Review comment: I’m just going by memory here, but I thought future.get gives you the return value of the callable which should always be true in this case for success? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastru
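A small self-contained Java example of the behaviour referred to in the review comment: Future.get() hands back the Callable's return value on success, and an exception thrown inside the Callable surfaces as an ExecutionException with the original as its cause (the tasks below are made up for illustration, not taken from the test).

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Future.get() returns the Callable's result on success; a failure is wrapped
// in an ExecutionException rather than returned.
public class FutureGetDemo {
  public static void main(String[] args) throws InterruptedException {
    ExecutorService executor = Executors.newFixedThreadPool(2);
    try {
      List<Callable<Boolean>> tasks = List.of(
          () -> true,                                             // succeeds: get() == true
          () -> { throw new RuntimeException("simulated 429"); }  // fails: get() throws
      );
      for (Future<Boolean> future : executor.invokeAll(tasks)) {
        try {
          System.out.println("callable returned: " + future.get());
        } catch (ExecutionException e) {
          System.out.println("callable failed: " + e.getCause().getMessage());
        }
      }
    } finally {
      executor.shutdown();
    }
  }
}
```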
[jira] [Commented] (LUCENE-9448) Make an eqivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173097#comment-17173097 ] Tomoko Uchida commented on LUCENE-9448: --- [~dweiss] thanks for the pointer. I'll open another issue for the jar manifest. > Make an eqivalent to Ant's "run" target for Luke module > --- > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-9448) Make an equivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated LUCENE-9448: Summary: Make an equivalent to Ant's "run" target for Luke module (was: Make an eqivalent to Ant's "run" target for Luke module) > Make an equivalent to Ant's "run" target for Luke module > > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9448) Make an equivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173099#comment-17173099 ] Dawid Weiss commented on LUCENE-9448: - You can do this here as well, Tomoko - these are related things anyway. > Make an equivalent to Ant's "run" target for Luke module > > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14684) CloudExitableDirectoryReaderTest failing about 25% of the time
[ https://issues.apache.org/jira/browse/SOLR-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173100#comment-17173100 ] Erick Erickson commented on SOLR-14684: --- Actually, as luck would have it, I can't try this until probably Sunday or Monday. We lost power due to a storm and the machine I can reliably reproduce it on won't be available until then. I'll leave a note to myself to do it as soon as I can. > CloudExitableDirectoryReaderTest failing about 25% of the time > -- > > Key: SOLR-14684 > URL: https://issues.apache.org/jira/browse/SOLR-14684 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Affects Versions: master (9.0) >Reporter: Erick Erickson >Priority: Major > Attachments: stdout > > Time Spent: 10m > Remaining Estimate: 0h > > If I beast this on my local machine, it fails (non reproducibly of course) > about 1/4 of the time. Log attached. The test itself hasn't changed in 11 > months or so. > It looks like occasionally the calls throw an error rather than return > partial results with a message: "Time allowed to handle this request > exceeded:[]". > It's been failing very intermittently for a couple of years, but the failure > rate really picked up in the last couple of weeks. IDK whether the failures > prior to the last couple of weeks are the same root cause. > I'll do some spelunking to see if I can pinpoint the commit that made this > happen, but it'll take a while. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9448) Make an equivalent to Ant's "run" target for Luke module
[ https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173109#comment-17173109 ] Erick Erickson commented on LUCENE-9448: I mentioned this to Tomoko when I got back to SOLR-13412, "Make the Lucene Luke module available from a Solr distribution" (really, just making it another command from bin/solr or similar). I eventually figured out that we don't actually package up the Luke jar or add it to the distro in the ant build and dropped it, and just this week got back to thinking about it. It seemed like something that'd be useful to have accessible from Solr *if it was easy*. I wanted to mention this as an input into the discussion of how to build Luke in the Gradle world. That said, do whatever is easiest from your perspective. The motivation behind SOLR-13412 was just that if it was easy to access from a distro, then I can imagine some Solr users finding it useful. If it's not simple I'll just close the Solr JIRA. Solr users who need/want to access the indexes via Luke may be comfortable downloading Solr and running Luke from a gradle anyway. > Make an equivalent to Ant's "run" target for Luke module > > > Key: LUCENE-9448 > URL: https://issues.apache.org/jira/browse/LUCENE-9448 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Tomoko Uchida >Priority: Minor > > With Ant build, Luke Swing app can be launched by "ant run" after checking > out the source code. "ant run" allows developers to immediately see the > effects of UI changes without creating the whole zip/tgz package (originally, > it was suggested when integrating Luke to Lucene). > In Gradle, {{:lucene:luke:run}} task would be easily implemented with > {{JavaExec}}, I think. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13412) Make the Lucene Luke module available from a Solr distribution
[ https://issues.apache.org/jira/browse/SOLR-13412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173135#comment-17173135 ] Erick Erickson commented on SOLR-13412: --- Getting back to this. This isn't part of the Lucene Gradle build yet, see LUCENE-9448. But apart from that, my original thought was "well, if Luke is easy to access, let's just make it available from Solr". Then I discovered that we don't actually package it all up; it's only accessible in the Ant world by executing an Ant target from the source tree. It's unclear whether Gradle will be the same. It's also unclear whether, even if Luke is packaged, it'd be part of the Lucene distro; those are TBD. So I have some questions about what to do here: 1> First and foremost, how much effort is this worth? Would the universe of Solr users who would find access to the stand-alone Luke app useful be OK with having to download the source code and execute a Gradle or Ant task? If so, I can just close this. I think it's arguable that the Luke Request Handler is sufficient for most Solr users, and those users who would find the standalone app useful can build it. 1.1> I also don't particularly want to answer a bunch of questions about "I started Luke on my laptop and I can't find the indexes for my collection that are on AWS" ;) 2> If we do include it, should we bloat the distro by adding Luke stand-alone or should it be a package? My current thinking is that if the stand-alone Luke app is packaged with Lucene automagically, I'll continue with this JIRA. I don't think we should add it to the distro separately though, so if we can build the standalone app easily and add it as a package that makes sense. If neither of the above, just close this as "won't fix". > Make the Lucene Luke module available from a Solr distribution > -- > > Key: SOLR-13412 > URL: https://issues.apache.org/jira/browse/SOLR-13412 > Project: Solr > Issue Type: Improvement >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > Attachments: SOLR-13412.patch > > > Now that [~Tomoko Uchida] has put in a great effort to bring Luke into the > project, I think it would be good to be able to access it from a Solr distro. > I want to go to the right place under the Solr install directory and start > Luke up to examine the local indexes. > This ticket is explicitly _not_ about accessing it from the admin UI, Luke is > a stand-alone app that must be invoked on the node that has a Lucene index on > the local filesystem > We need to > * have it included in Solr when running "ant package". > * add some bits to the ref guide on how to invoke > ** Where to invoke it from > ** mention anything that has to be installed. > ** any other "gotchas" someone just installing Solr should be aware of. > * Ant should not be necessary. > * > > I'll assign this to myself to keep track of, but would not be offended in the > least if someone with more knowledge of "ant package" and the like wanted to > take it over ;) > If we can do it at all -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Resolved] (SOLR-14721) RequestHandlerBase org.apache.solr.common.SolrException: No such core
[ https://issues.apache.org/jira/browse/SOLR-14721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-14721. --- Resolution: Incomplete Please raise questions like this on the user's list, we try to reserve JIRAs for known bugs/enhancements rather than usage questions. At first glance this looks a lot like something in your environment, it's the first I've seen this question. See: http://lucene.apache.org/solr/community.html#mailing-lists-irc there are links to both Lucene and Solr mailing lists there. A _lot_ more people will see your question on that list and may be able to help more quickly. There's not enough information here to even begin to diagnose the problem, no steps to reproduce, etc. You might want to review: https://wiki.apache.org/solr/UsingMailingLists If it's determined that this really is a code issue or enhancement to Lucene or Solr and not a configuration/usage problem, we can raise a new JIRA or reopen this one. > RequestHandlerBase org.apache.solr.common.SolrException: No such core > -- > > Key: SOLR-14721 > URL: https://issues.apache.org/jira/browse/SOLR-14721 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - C# >Affects Versions: 8.1 > Environment: > Testing >Reporter: anil >Priority: Major > Labels: performance > > Hi Team, > We are getting below exception where Core is getting unloaded during index > rebuild in Solr 8.1 version > > RequestHandlerBase org.apache.solr.common.SolrException: No such core. > > Exception: > Test:_index] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: > No such core: Test_index > at org.apache.solr.core.SolrCores.swap(SolrCores.java:268) > at > org.apache.solr.core.CoreContainer.swap(CoreContainer.java:1578) > at > org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$3(CoreAdminOperation.java:138) > at > org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360) > at > org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396) > at > org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at > org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:796) > at > org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:762) > at > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:522) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:397) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345) > at > 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126) > at > org.eclipse.jetty.server.handler.HandlerWrapper.han
[jira] [Resolved] (SOLR-13381) Unexpected docvalues type SORTED_NUMERIC Exception when grouping by a PointField facet
[ https://issues.apache.org/jira/browse/SOLR-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-13381. --- Resolution: Information Provided First of all, any time you change your schema you should re-index from scratch. There are very few exceptions to this rule, and changing field types is not one of them. That means completely removing your index. If you can't delete your collection and start over, I strongly advise that you actually create a new collection and index to that after the schema changes, after which you can create an alias. Even if you haven't changed the type, this particular error often results from changing the multiValued attribute on the field. This is another schema change that requires re-indexing from scratch as outlined above. As for Trie/Point, Trie has, indeed, been deprecated in Lucene. Points-based fields are superior for range queries. However, they are slower for individual value lookup, i.e. numeric_field:1. Your choices are: 1> go ahead and use Trie fields. They'll be supported for some time. 2> use point fields if you can live with the slowdown mentioned above. This doesn't become apparent unless you're looking at a lot of values. 3> I know this sounds weird, but use a copyField to a string-based field for individual lookups and, when you need that functionality, search on that. Please use the user's list for questions before raising a JIRA. See http://lucene.apache.org/solr/community.html#mailing-lists-irc - there are links to both Lucene and Solr mailing lists there. > Unexpected docvalues type SORTED_NUMERIC Exception when grouping by a > PointField facet > -- > > Key: SOLR-13381 > URL: https://issues.apache.org/jira/browse/SOLR-13381 > Project: Solr > Issue Type: Bug > Components: faceting >Affects Versions: 7.0, 7.6, 7.7, 7.7.1 > Environment: solr, solrcloud >Reporter: Zhu JiaJun >Priority: Major > Attachments: SOLR-13381.patch > > > Hey, > I got an "Unexpected docvalues type SORTED_NUMERIC" exception when I perform > group facet on an IntPointField. Debugging into the source code, the cause is > that internally the docvalue type for PointField is "NUMERIC" (single value) > or "SORTED_NUMERIC" (multi value), while the TermGroupFacetCollector class > requires the facet field must have a "SORTED" or "SOTRTED_SET" docvalue type: > [https://github.com/apache/lucene-solr/blob/2480b74887eff01f729d62a57b415d772f947c91/lucene/grouping/src/java/org/apache/lucene/search/grouping/TermGroupFacetCollector.java#L313] > > When I change schema for all int field to TrieIntField, the group facet then > work. Since internally the docvalue type for TrieField is SORTED (single > value) or SORTED_SET (multi value). > Regarding that the "TrieField" is depreciated in Solr7, please help on this > grouping facet issue for PointField. I also commented this issue in SOLR-7495. > > In addtional, all place of "${solr.tests.IntegerFieldType}" in the unit test > files seems to be using the "TrieintField", if change to "IntPointField", > some unit tests will fail, for example: > [https://github.com/apache/lucene-solr/blob/3de0b3671998cc9bc723d10f1b31ce48cbd4fa64/solr/core/src/test/org/apache/solr/request/SimpleFacetsTest.java#L417] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
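As a rough Lucene-level illustration of why the exception above appears (this is not Solr schema code and the field names are made up): point-based numeric fields carry NUMERIC/SORTED_NUMERIC doc values, while the term-based group-facet collector expects SORTED/SORTED_SET, which is what a string-typed copyField target would index.

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.document.SortedSetDocValuesField;
import org.apache.lucene.util.BytesRef;

// Field names are illustrative; this only shows which doc-values types end up in the index.
public class DocValuesTypeSketch {

  // How an IntPointField-style field is typically indexed: point values for range queries
  // plus SORTED_NUMERIC doc values, which the term-based group-facet collector rejects.
  static Document pointStyleDoc(int value) {
    Document doc = new Document();
    doc.add(new IntPoint("facet_field", value));
    doc.add(new SortedNumericDocValuesField("facet_field", value));
    return doc;
  }

  // A string-based copyField target stores SORTED_SET doc values, which term-based
  // grouping/faceting can consume.
  static Document stringCopyFieldDoc(int value) {
    Document doc = new Document();
    doc.add(new SortedSetDocValuesField("facet_field_str", new BytesRef(Integer.toString(value))));
    return doc;
  }
}
```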
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1684: SOLR-14613: strongly typed initial proposal for placement plugin interface
murblanc commented on a change in pull request #1684: URL: https://github.com/apache/lucene-solr/pull/1684#discussion_r467083780 ## File path: solr/core/src/java/org/apache/solr/cluster/placement/PropertyKeyFactory.java ## @@ -0,0 +1,61 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.solr.cluster.placement; + +/** + * Factory used by the plugin to create property keys to request property values from Solr. + * + * Building of a {@link PropertyKey} requires specifying the target (context) from which the value of that key should be + * obtained. This is done by specifying the appropriate {@link PropertyValueSource}. + * For clarity, when only a single type of target is acceptable, the corresponding subtype of {@link PropertyValueSource} is used instead + * (for example {@link Node}). + */ +public interface PropertyKeyFactory { + /** + * Returns a property key to request the number of cores on a {@link Node}. + */ + PropertyKey createCoreCountKey(Node node); + + /** + * Returns a property key to request disk related info on a {@link Node}. + */ + PropertyKey createDiskInfoKey(Node node); + + /** + * Returns a property key to request the value of a system property on a {@link Node}. + * @param systemPropertyName the name of the system property to retrieve. + */ + PropertyKey createSystemPropertyKey(Node node, String systemPropertyName); + + /** + * Returns a property key to request the value of a metric. + * + * Not all metrics make sense everywhere, but metrics can be applied to different objects. For example + * SEARCHER.searcher.indexCommitSize would make sense for a given replica of a given shard of a given collection, + * and possibly in other contexts. + * + * @param metricSource The registry of the metric. For example a specific {@link Replica}. + * @param metricName for example SEARCHER.searcher.indexCommitSize. + */ + PropertyKey createMetricKey(PropertyValueSource metricSource, String metricName); Review comment: Is `solr.jetty` of any use or `solr.node` and `solr.jvm` are sufficient? I see `SolrInfoBean.Group` entries other than `node`, `jvm`, `jetty` are: - `collection`, `shard`, `cluster`: these can be inferred from `PropertyValueSource` type, - `core`: likely not needed for placement computation, - `overseer`: doesn't seem to be used This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
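For context on the interface in the diff above, a hedged sketch of how a placement plugin might use the proposed factory; the nested stand-in interfaces only mirror the signatures visible in the diff so the example is self-contained, and the surrounding plugin code and the "availability_zone" system property are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-ins mirror the PR's proposed org.apache.solr.cluster.placement types; only the
// keysForNode() usage is the point of this sketch.
public class PlacementKeySketch {
  interface Node {}
  interface PropertyValueSource {}
  interface PropertyKey {}
  interface PropertyKeyFactory {
    PropertyKey createCoreCountKey(Node node);
    PropertyKey createDiskInfoKey(Node node);
    PropertyKey createSystemPropertyKey(Node node, String systemPropertyName);
    PropertyKey createMetricKey(PropertyValueSource metricSource, String metricName);
  }

  // The set of keys a plugin might ask Solr to resolve for each candidate node before
  // computing a placement; "availability_zone" is a hypothetical system property.
  static List<PropertyKey> keysForNode(PropertyKeyFactory factory, Node node) {
    List<PropertyKey> keys = new ArrayList<>();
    keys.add(factory.createCoreCountKey(node));
    keys.add(factory.createDiskInfoKey(node));
    keys.add(factory.createSystemPropertyKey(node, "availability_zone"));
    return keys;
  }
}
```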
[jira] [Updated] (LUCENE-9107) CommonsTermsQuery with huge no. of terms slower with top-k scoring
[ https://issues.apache.org/jira/browse/LUCENE-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vincenzo D'Amore updated LUCENE-9107: - Attachment: Screenshot 2020-08-07 at 16.20.01.png > CommonsTermsQuery with huge no. of terms slower with top-k scoring > -- > > Key: LUCENE-9107 > URL: https://issues.apache.org/jira/browse/LUCENE-9107 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Affects Versions: 8.3 >Reporter: Tommaso Teofili >Priority: Major > Attachments: Screenshot 2020-08-07 at 16.20.01.png > > > In [1] a {{CommonTermsQuery}} is used in order to perform a query with lots > of (duplicate) terms. Using a max term frequency cutoff of 0.999 for low > frequency terms, the query, although big, finishes in around 2-300ms with > Lucene 7.6.0. > However, when upgrading the code to Lucene 8.x, the query runs in 2-3s > instead [2]. > After digging a bit into it it seems that the regression in speed comes from > the fact that top-k scoring introduced by default in version 8 is causing > that, not sure "where" exactly in the code though. > When switching back to complete hit scoring [3], the speed goes back to the > initial 2-300ms also in Lucene 8.3.x. > It'd be nice to understand the reason why this is happening and if it is only > concerning {{CommonTermsQuery}} or affecting {{BooleanQuery}} as well. > If this is a case that depends on the data and application involved (Anserini > in this case), the application should handle it, otherwise if it is a > regression/bug in Lucene it'd be nice to fix it. > [1] : > https://github.com/tteofili/Anserini-embeddings/blob/nnsearch/src/main/java/io/anserini/embeddings/nn/fw/FakeWordsRunner.java > [2] : > https://github.com/castorini/anserini/blob/master/src/main/java/io/anserini/analysis/vectors/ApproximateNearestNeighborEval.java > [3] : > https://github.com/tteofili/anserini/blob/ann-paper-reproduce/src/main/java/io/anserini/analysis/vectors/ApproximateNearestNeighborEval.java#L174 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-14708) Backward-Compatible Replication
[ https://issues.apache.org/jira/browse/SOLR-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173011#comment-17173011 ] Marcus Eagan edited comment on SOLR-14708 at 8/7/20, 2:54 PM: -- Honestly [~tflobbe] there seems to be a lot of dependencies and other issues that challenge it. I will have to review in more closely with dedicated next week. Backward compatibility touches various parts of the application and will require a coordinated effort probably from you, AB, and may others. FastLRU Cache seems to exist in 8.6 still. This causes problems for replication in the tests. was (Author: marcussorealheis): Honestly [~tflobbe] there seems to be a lot of dependencies and other issues that challenge it. I will have to review in more closely with dedicated next week. Backward compatibility touches various parts of the application and will require a coordinated effort probably from you, AB, and may others. > Backward-Compatible Replication > --- > > Key: SOLR-14708 > URL: https://issues.apache.org/jira/browse/SOLR-14708 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Marcus Eagan >Priority: Critical > > In [SOLR-14702|https://issues.apache.org/jira/browse/SOLR-14702] I proposed > that we remove master/slave terminology from the Solr codebase. Now that's > complete, we need to ensure it is backward compatible to support rolling > upgrades from 8.7.x to 9.x because we really ought not to make it harder to > upgrade Solr. > Tomas offered a helpful path in a now abandoned PR: > {quote}One way to get back compatibility and rolling upgrades could be to > make 9.x code be able to read previous formats, but write new format, and > make 8.x (since 8.7) read new and old, but write old? Anyone wanting to do a > rolling upgrade to 9 would have to be on at least 8.7. Rolling upgrades to > 8.7 would still work. > All the code other than the requests/responses could be changed in 8_x > branch, in addition to master. > {quote} > The approach that we will take is to add a ternary operator in 9_X to accept > parameter values for the legacy verbiage, or leader/follower, but only write > leader/follower. We need to then make 8_x work in the inverse way. The burden > here is not on that proposal or on the code in my view. Instead, the burden > is on the test plan. > If anyone has any guidance please share but here are my thoughts: > Case A: > Test the case where a user is running a standalone cluster in 8 with three > nodes but then updates one of the nodes. > Case B: > Test the case where a user is running a mixed cluster standalone cluster, and > the leader node is forced to fail and then is brought back. > Case C: > A SolrCloud cluster that has a mix of 8 and 9 nodes goes down during a > rolling upgrade and a follower needs to become leader. > I know haven't listed all possible scenarios or everything that could happen. > Please let me know if you have thoughts or guidance on how best to accomplish > this work. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
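A minimal sketch of the read-old/write-new idea described above, assuming hypothetical parameter names ("leaderUrl"/"masterUrl" are placeholders, not necessarily the handler's real keys):
{code:java}
import org.apache.solr.common.params.SolrParams;

public final class CompatParams {
  private CompatParams() {}

  /** Prefer the new parameter name, fall back to the legacy one so 8.x peers keep working. */
  public static String get(SolrParams params, String newName, String legacyName) {
    String value = params.get(newName);
    return value != null ? value : params.get(legacyName);
  }
}

// Usage (names illustrative): String leaderUrl = CompatParams.get(req.getParams(), "leaderUrl", "masterUrl");
{code}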
[jira] [Updated] (LUCENE-9107) CommonsTermsQuery with huge no. of terms slower with top-k scoring
[ https://issues.apache.org/jira/browse/LUCENE-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vincenzo D'Amore updated LUCENE-9107: - Attachment: image-2020-08-07-16-54-27-905.png > CommonsTermsQuery with huge no. of terms slower with top-k scoring > -- > > Key: LUCENE-9107 > URL: https://issues.apache.org/jira/browse/LUCENE-9107 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Affects Versions: 8.3 >Reporter: Tommaso Teofili >Priority: Major > Attachments: Screenshot 2020-08-07 at 16.20.01.png, > image-2020-08-07-16-54-27-905.png > > > In [1] a {{CommonTermsQuery}} is used in order to perform a query with lots > of (duplicate) terms. Using a max term frequency cutoff of 0.999 for low > frequency terms, the query, although big, finishes in around 2-300ms with > Lucene 7.6.0. > However, when upgrading the code to Lucene 8.x, the query runs in 2-3s > instead [2]. > After digging a bit into it it seems that the regression in speed comes from > the fact that top-k scoring introduced by default in version 8 is causing > that, not sure "where" exactly in the code though. > When switching back to complete hit scoring [3], the speed goes back to the > initial 2-300ms also in Lucene 8.3.x. > It'd be nice to understand the reason why this is happening and if it is only > concerning {{CommonTermsQuery}} or affecting {{BooleanQuery}} as well. > If this is a case that depends on the data and application involved (Anserini > in this case), the application should handle it, otherwise if it is a > regression/bug in Lucene it'd be nice to fix it. > [1] : > https://github.com/tteofili/Anserini-embeddings/blob/nnsearch/src/main/java/io/anserini/embeddings/nn/fw/FakeWordsRunner.java > [2] : > https://github.com/castorini/anserini/blob/master/src/main/java/io/anserini/analysis/vectors/ApproximateNearestNeighborEval.java > [3] : > https://github.com/tteofili/anserini/blob/ann-paper-reproduce/src/main/java/io/anserini/analysis/vectors/ApproximateNearestNeighborEval.java#L174 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-9107) CommonsTermsQuery with huge no. of terms slower with top-k scoring
[ https://issues.apache.org/jira/browse/LUCENE-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vincenzo D'Amore updated LUCENE-9107: - Attachment: Screenshot 2020-08-07 at 16.20.05.png > CommonsTermsQuery with huge no. of terms slower with top-k scoring > -- > > Key: LUCENE-9107 > URL: https://issues.apache.org/jira/browse/LUCENE-9107 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Affects Versions: 8.3 >Reporter: Tommaso Teofili >Priority: Major > Attachments: Screenshot 2020-08-07 at 16.20.01.png, Screenshot > 2020-08-07 at 16.20.05.png, image-2020-08-07-16-54-27-905.png > > > In [1] a {{CommonTermsQuery}} is used in order to perform a query with lots > of (duplicate) terms. Using a max term frequency cutoff of 0.999 for low > frequency terms, the query, although big, finishes in around 2-300ms with > Lucene 7.6.0. > However, when upgrading the code to Lucene 8.x, the query runs in 2-3s > instead [2]. > After digging a bit into it it seems that the regression in speed comes from > the fact that top-k scoring introduced by default in version 8 is causing > that, not sure "where" exactly in the code though. > When switching back to complete hit scoring [3], the speed goes back to the > initial 2-300ms also in Lucene 8.3.x. > It'd be nice to understand the reason why this is happening and if it is only > concerning {{CommonTermsQuery}} or affecting {{BooleanQuery}} as well. > If this is a case that depends on the data and application involved (Anserini > in this case), the application should handle it, otherwise if it is a > regression/bug in Lucene it'd be nice to fix it. > [1] : > https://github.com/tteofili/Anserini-embeddings/blob/nnsearch/src/main/java/io/anserini/embeddings/nn/fw/FakeWordsRunner.java > [2] : > https://github.com/castorini/anserini/blob/master/src/main/java/io/anserini/analysis/vectors/ApproximateNearestNeighborEval.java > [3] : > https://github.com/tteofili/anserini/blob/ann-paper-reproduce/src/main/java/io/anserini/analysis/vectors/ApproximateNearestNeighborEval.java#L174 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9107) CommonsTermsQuery with huge no. of terms slower with top-k scoring
[ https://issues.apache.org/jira/browse/LUCENE-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173187#comment-17173187 ] Vincenzo D'Amore commented on LUCENE-9107: -- Hi, I did a little step further trying to identify the difference of performance using CommonTermsQuery with different versions of Solr (7.3.1 vs 8.6.0). In this fork of anserini repo branch test_8.6.0 [https://github.com/freedev/anserini/blob/test_8.6.0] There I was trying the ann sample, here the steps to reproduce the problem: copy and build {quote}{{git clone [https://github.com/freedev/anserini.git]}} {{git checkout test_8.6.0}} {{mvn -Prelease clean package}} {quote} create the lucene index {quote}{{java -cp target/anserini-0.9.5-SNAPSHOT-fatjar.jar io.anserini.ann.IndexVectors -input glove.6B.300d.txt -path glove300-idx-8.6.0 -encoding fw}} {quote} reproduce the issue (the vector used for the world apple is hardcoded into the ApproximateNearestNeighborSearch main) {quote}{{java -cp target/anserini-0.9.5-SNAPSHOT-fatjar.jar io.anserini.ann.ApproximateNearestNeighborSearch -input glove.6B.300d.txt -path glove300-idx-8.6.0 -encoding fw -word apple}} {quote} This is the VisualVM Sampler output after having monitored {{ApproximateNearestNeighborSearch}} with Java Flight Recorder !image-2020-08-07-16-54-27-905.png|width=921,height=609! Changing the line [186 in ApproximateNearestNeighborSearch|https://github.com/freedev/anserini/blob/test_8.6.0/src/main/java/io/anserini/ann/ApproximateNearestNeighborSearch.java#L186] from: {{TopScoreDocCollector.create(indexArgs.depth, 0);}} to: {{TopScoreDocCollector.create(indexArgs.depth, Integer.MAX_VALUE);}} greately reduces the time spent (from ~2 sec to 3-400 milliseconds), see the screenshot: !Screenshot 2020-08-07 at 16.20.05.png|width=927,height=613! > CommonsTermsQuery with huge no. of terms slower with top-k scoring > -- > > Key: LUCENE-9107 > URL: https://issues.apache.org/jira/browse/LUCENE-9107 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Affects Versions: 8.3 >Reporter: Tommaso Teofili >Priority: Major > Attachments: Screenshot 2020-08-07 at 16.20.01.png, Screenshot > 2020-08-07 at 16.20.05.png, image-2020-08-07-16-54-27-905.png > > > In [1] a {{CommonTermsQuery}} is used in order to perform a query with lots > of (duplicate) terms. Using a max term frequency cutoff of 0.999 for low > frequency terms, the query, although big, finishes in around 2-300ms with > Lucene 7.6.0. > However, when upgrading the code to Lucene 8.x, the query runs in 2-3s > instead [2]. > After digging a bit into it it seems that the regression in speed comes from > the fact that top-k scoring introduced by default in version 8 is causing > that, not sure "where" exactly in the code though. > When switching back to complete hit scoring [3], the speed goes back to the > initial 2-300ms also in Lucene 8.3.x. > It'd be nice to understand the reason why this is happening and if it is only > concerning {{CommonTermsQuery}} or affecting {{BooleanQuery}} as well. > If this is a case that depends on the data and application involved (Anserini > in this case), the application should handle it, otherwise if it is a > regression/bug in Lucene it'd be nice to fix it. 
> [1] : > https://github.com/tteofili/Anserini-embeddings/blob/nnsearch/src/main/java/io/anserini/embeddings/nn/fw/FakeWordsRunner.java > [2] : > https://github.com/castorini/anserini/blob/master/src/main/java/io/anserini/analysis/vectors/ApproximateNearestNeighborEval.java > [3] : > https://github.com/tteofili/anserini/blob/ann-paper-reproduce/src/main/java/io/anserini/analysis/vectors/ApproximateNearestNeighborEval.java#L174 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
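For reference, the only code difference between the slow and fast runs reported above is the collector's totalHitsThreshold (Lucene 8.x); a minimal sketch:
{code:java}
import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.TopScoreDocCollector;

static TopDocs search(IndexSearcher searcher, Query query, int depth, boolean exhaustive) throws IOException {
  // totalHitsThreshold = 0 lets the collector skip non-competitive docs (the 8.x top-k path);
  // Integer.MAX_VALUE scores and counts every hit, which matches the 7.x timings reported above.
  TopScoreDocCollector collector = TopScoreDocCollector.create(depth, exhaustive ? Integer.MAX_VALUE : 0);
  searcher.search(query, collector);
  return collector.topDocs();
}
{code}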
[jira] [Commented] (SOLR-14630) CloudSolrClient doesn't pick correct core when server contains more shards
[ https://issues.apache.org/jira/browse/SOLR-14630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173188#comment-17173188 ] Christine Poerschke commented on SOLR-14630: {quote}I'm sorry for late response. {quote} No problem, we all contribute here as and when time permits. Thank you for returning and continuing the investigation here! {quote}Update and search are working because request is forwarded to correct shard from wrong shard. {quote} That's good to know and have clarified, thanks. {quote}Main problems are overhead forwarding requests and loosing parameters from update parameters during forwarding requests. When we perform batch update ... {quote} Yes, if a request has to be forwarded there would be some overhead. Loss of parameters during the forwarding, that is new information, interesting. When you say "batch update", do you mean more than one document in the same request or perhaps something else? If the batch size was one then does the issue happen also, I wonder? {quote}... after that in update request processor chain ... {quote} I'm not very familiar with update request processor chains but the [https://lucene.apache.org/solr/guide/8_6/update-request-processors.html#update-processors-in-solrcloud] documentation was useful and the SOLR-8030 ticket mentioned in it sounds interesting. Anyway, going back to the proposed change: {code:java} String joinedInputCollections = StrUtils.join(inputCollections, ','); Set seenNodes = new HashSet<>(); sortedReplicas.forEach( replica -> { if (seenNodes.add(replica.getNodeName())) { - theUrlList.add(ZkCoreNodeProps.getCoreUrl(replica.getBaseUrl(), joinedInputCollections)); + theUrlList.add(replica.getCoreUrl()); } }); {code} Thinking out aloud ... * What if {{inputCollections}} contained more than one element? * What if {{inputCollections}} contained an alias that was resolved at [line 1080|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.5.2/solr/solrj/src/java/org/apache/solr/client/solrj/impl/BaseCloudSolrClient.java#L1080], does it matter that before the alias (e.g. {{collection_one}}) was appended but now a core name (e.g. {{collection1_shard2_replica1}}) is appended? * A {{route}} can designate multiple shards (right?) and in the {{seenNodes.add(replica.getNodeName())}} check, what is node name again, it's localhost_8983_solr right? If so then that check guards against multiple {{localhost:8983/solr/*}} i.e. if the {{route}} designated shard2 and shard4 we would get {{localhost:8983/solr/collection1_shard2_replica1}} or {{localhost:8983/solr/collection1_shard4_replica1}} but not both. Would {{collection1_shard2_replica1}} do its stuff and forward to {{collection1_shard4_replica1}} or would there be no forwarding? * Might _"If a route key was supplied and ... then do new-stuff else do existing-stuff"_ specialisation work? It could complicate the already quite complicated code though, hmm. * Might there be other solutions to the problem? > CloudSolrClient doesn't pick correct core when server contains more shards > -- > > Key: SOLR-14630 > URL: https://issues.apache.org/jira/browse/SOLR-14630 > Project: Solr > Issue Type: Bug > Components: SolrCloud, SolrJ >Affects Versions: 8.5.1, 8.5.2 >Reporter: Ivan Djurasevic >Priority: Major > > Precondition: create collection with 4 shards on one server. > During search and update, solr cloud client picks wrong core even _route_ > exists in query param. 
In BaseSolrClient class, method sendRequest, > > {code:java} > sortedReplicas.forEach( replica -> { > if (seenNodes.add(replica.getNodeName())) { > theUrlList.add(ZkCoreNodeProps.getCoreUrl(replica.getBaseUrl(), > joinedInputCollections)); > } > }); > {code} > > Previous part of code adds base url(localhost:8983/solr/collection_name) to > theUrlList, it doesn't create core address(localhost:8983/solr/core_name). If > we change previous code to: > {quote} > {code:java} > sortedReplicas.forEach(replica -> { > if (seenNodes.add(replica.getNodeName())) { > theUrlList.add(replica.getCoreUrl()); > } > });{code} > {quote} > Solr cloud client picks core which is defined with _route_ parameter. > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
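A rough sketch of the "if a route key was supplied ... do new-stuff else do existing-stuff" specialisation mentioned above; {{routeKeySupplied}} is a hypothetical flag and this is not a proposed patch:
{code:java}
// Shape of the conditional only; variable names follow BaseCloudSolrClient.sendRequest.
String joinedInputCollections = StrUtils.join(inputCollections, ',');
Set<String> seenNodes = new HashSet<>();
sortedReplicas.forEach(replica -> {
  if (seenNodes.add(replica.getNodeName())) {
    if (routeKeySupplied && inputCollections.size() == 1) {
      // _route_ resolved to concrete shard(s): address the core directly and avoid a forwarding hop.
      theUrlList.add(replica.getCoreUrl());
    } else {
      // Existing behaviour: base URL plus the collection (or alias) name.
      theUrlList.add(ZkCoreNodeProps.getCoreUrl(replica.getBaseUrl(), joinedInputCollections));
    }
  }
});
{code}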
[GitHub] [lucene-solr] HoustonPutman closed pull request #1717: Updating jenkins links to the new cluster.
HoustonPutman closed pull request #1717: URL: https://github.com/apache/lucene-solr/pull/1717 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1684: SOLR-14613: strongly typed initial proposal for placement plugin interface
murblanc commented on a change in pull request #1684: URL: https://github.com/apache/lucene-solr/pull/1684#discussion_r467102867 ## File path: solr/core/src/java/org/apache/solr/cluster/placement/MetricPropertyValue.java ## @@ -0,0 +1,30 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.solr.cluster.placement; + +/** + * A {@link PropertyValue} representing a metric on the target {@link PropertyValueSource}. + * Note there might be overlap with {@link SystemLoadPropertyValue} (only applicable to {@link Node}'s), may need to clarify. + */ +public interface MetricPropertyValue extends PropertyValue { + /** + * Returns the metric value from the {@link PropertyValueSource} on which it was retrieved. + * TODO: what type should the metric be? Maybe offer multiple getters for different java types and have each metric implement the right one and throw from the wrong ones? This avoids casting... Review comment: I've created the number and string property value types but I'm unclear on the Map one so we need to discuss this further. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
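One possible shape for the typed getters the TODO asks about, written down only to make the discussion concrete (an assumption, not what the PR currently contains):
```java
// Hypothetical: one getter per Java type so plugin code avoids casting.
public interface MetricPropertyValue extends PropertyValue {
  /** @return the metric as a number, or null if this metric is not numeric. */
  Double getNumberValue();

  /** @return the metric as a string, or null if no string representation applies. */
  String getStringValue();
}
```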
[jira] [Commented] (SOLR-14708) Backward-Compatible Replication
[ https://issues.apache.org/jira/browse/SOLR-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173195#comment-17173195 ] Marcus Eagan commented on SOLR-14708: - Also, even with this change aside, Solr 9 is very far from back compatibility in other areas. > Backward-Compatible Replication > --- > > Key: SOLR-14708 > URL: https://issues.apache.org/jira/browse/SOLR-14708 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Marcus Eagan >Priority: Critical > > In [SOLR-14702|https://issues.apache.org/jira/browse/SOLR-14702] I proposed > that we remove master/slave terminology from the Solr codebase. Now that's > complete, we need to ensure it is backward compatible to support rolling > upgrades from 8.7.x to 9.x because we really ought not to make it harder to > upgrade Solr. > Tomas offered a helpful path in a now abandoned PR: > {quote}One way to get back compatibility and rolling upgrades could be to > make 9.x code be able to read previous formats, but write new format, and > make 8.x (since 8.7) read new and old, but write old? Anyone wanting to do a > rolling upgrade to 9 would have to be on at least 8.7. Rolling upgrades to > 8.7 would still work. > All the code other than the requests/responses could be changed in 8_x > branch, in addition to master. > {quote} > The approach that we will take is to add a ternary operator in 9_X to accept > parameter values for the legacy verbiage, or leader/follower, but only write > leader/follower. We need to then make 8_x work in the inverse way. The burden > here is not on that proposal or on the code in my view. Instead, the burden > is on the test plan. > If anyone has any guidance please share but here are my thoughts: > Case A: > Test the case where a user is running a standalone cluster in 8 with three > nodes but then updates one of the nodes. > Case B: > Test the case where a user is running a mixed cluster standalone cluster, and > the leader node is forced to fail and then is brought back. > Case C: > A SolrCloud cluster that has a mix of 8 and 9 nodes goes down during a > rolling upgrade and a follower needs to become leader. > I know haven't listed all possible scenarios or everything that could happen. > Please let me know if you have thoughts or guidance on how best to accomplish > this work. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] madrob commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters
madrob commented on a change in pull request #1686: URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r467104137 ## File path: solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java ## @@ -58,18 +58,18 @@ public RequestRateLimiter(RateLimiterConfig rateLimiterConfig) { public Pair handleRequest() throws InterruptedException { if (!rateLimiterConfig.isEnabled) { - return new Pair(true, null); + return new Pair(true, new AcquiredSlotMetadata(null, null)); } if (guaranteedSlotsPool.tryAcquire(rateLimiterConfig.waitForSlotAcquisition, TimeUnit.MILLISECONDS)) { - return new Pair(true, new AcquiredSlotMetadata(this, false)); + return new Pair(true, new AcquiredSlotMetadata(this, guaranteedSlotsPool)); } if (borrowableSlotsPool.tryAcquire(rateLimiterConfig.waitForSlotAcquisition, TimeUnit.MILLISECONDS)) { - return new Pair(true, new AcquiredSlotMetadata(this, true)); + return new Pair(true, new AcquiredSlotMetadata(this, borrowableSlotsPool)); } -return new Pair(false, null); +return new Pair(false, new AcquiredSlotMetadata(null, null)); } Review comment: I would write this as something like: ``` class SemaphoreWrapper { private final Semaphore wrapped; SemaphoreWrapper(Semaphore wrapped) { this.wrapped = wrapped; } void release() { if (wrapped != null) wrapped.release(); } } public SemaphoreWrapper handleRequest() throws InterruptedException { if (!rateLimiterConfig.isEnabled) { return nopPool; // = new SemaphoreWrapper(null); } if (guaranteedSlotsPool.tryAcquire(rateLimiterConfig.waitForSlotAcquisition, TimeUnit.MILLISECONDS)) { return new SemaphoreWrapper(guaranteedSlotsPool); } if (borrowableSlotsPool.tryAcquire(rateLimiterConfig.waitForSlotAcquisition, TimeUnit.MILLISECONDS)) { return new SemaphoreWrapper(borrowableSlotsPool); } return null; } ``` And probably have the limiter own the wrappers and return the right thing each time instead of creating a new wrapper. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1684: SOLR-14613: strongly typed initial proposal for placement plugin interface
murblanc commented on a change in pull request #1684: URL: https://github.com/apache/lucene-solr/pull/1684#discussion_r467108068 ## File path: solr/core/src/java/org/apache/solr/cluster/placement/PlacementException.java ## @@ -0,0 +1,48 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.solr.cluster.placement; + +/** + * Exception thrown by a {@link PlacementPlugin} when it is unable to compute placement for whatever reason (except an Review comment: Skipping on this one for now. I believe we need to better identify what problems plugins will be running into (that are not Solr side exceptions bubbling up) and how the Solr side implementation could use such a reason before being able to make useful design decisions. If you do have specific use cases in mind, please let me know. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (LUCENE-9449) Skip non-competitive documents when sort by _doc with search after
Mayya Sharipova created LUCENE-9449: --- Summary: Skip non-competitive documents when sort by _doc with search after Key: LUCENE-9449 URL: https://issues.apache.org/jira/browse/LUCENE-9449 Project: Lucene - Core Issue Type: Improvement Reporter: Mayya Sharipova Enhance DocComparator to provide an iterator over competitive documents when searching with an "after" FieldDoc. This iterator can quickly position on the desired "after" document, and skip all documents before "after", or even whole segments that contain only documents before "after". This is especially efficient when "after" is high. Related to LUCENE-9280 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
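The usage pattern this targets, for context; with the proposed iterator the second call below could skip every segment whose documents all precede "after":
{code:java}
import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.TopDocs;

static TopDocs nextPage(IndexSearcher searcher, Query query) throws IOException {
  TopDocs firstPage = searcher.search(query, 100, Sort.INDEXORDER);
  // Assumes at least one hit on the first page.
  ScoreDoc after = firstPage.scoreDocs[firstPage.scoreDocs.length - 1];
  // With the enhanced DocComparator, this can skip all docs (and whole segments) up to after.doc.
  return searcher.searchAfter(after, query, 100, Sort.INDEXORDER);
}
{code}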
[GitHub] [lucene-solr] atris commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters
atris commented on a change in pull request #1686: URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r467110274 ## File path: solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java ## @@ -58,18 +58,18 @@ public RequestRateLimiter(RateLimiterConfig rateLimiterConfig) { public Pair handleRequest() throws InterruptedException { if (!rateLimiterConfig.isEnabled) { - return new Pair(true, null); + return new Pair(true, new AcquiredSlotMetadata(null, null)); } if (guaranteedSlotsPool.tryAcquire(rateLimiterConfig.waitForSlotAcquisition, TimeUnit.MILLISECONDS)) { - return new Pair(true, new AcquiredSlotMetadata(this, false)); + return new Pair(true, new AcquiredSlotMetadata(this, guaranteedSlotsPool)); } if (borrowableSlotsPool.tryAcquire(rateLimiterConfig.waitForSlotAcquisition, TimeUnit.MILLISECONDS)) { - return new Pair(true, new AcquiredSlotMetadata(this, true)); + return new Pair(true, new AcquiredSlotMetadata(this, borrowableSlotsPool)); } -return new Pair(false, null); +return new Pair(false, new AcquiredSlotMetadata(null, null)); } Review comment: We need the rate limiter to be a part of the returned value here so that the decrement can be invoked on the right RequestRateLimiter? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1684: SOLR-14613: strongly typed initial proposal for placement plugin interface
murblanc commented on a change in pull request #1684: URL: https://github.com/apache/lucene-solr/pull/1684#discussion_r466860061 ## File path: solr/core/src/java/org/apache/solr/cluster/placement/PlacementPlugin.java ## @@ -0,0 +1,41 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.solr.cluster.placement; + +/** + * Implemented by external plugins to control replica placement and movement on the search cluster (as well as other things + * such as cluster elasticity?) when cluster changes are required (initiated elsewhere, most likely following a Collection + * API call). + */ +public interface PlacementPlugin { Review comment: **Please ignore this comment**. I thought you were referring to how various plugins get selected for doing work (currently hard coded in `AssignStrategyFactory.create`) > The configuration part I didn't really do yet. Unclear to me how the `configure` method here would be used, since in order to get to it the plugin has to be loaded already... > I was thinking defining the plugin class or classes in some solr configuration file (with the rest of the config). At least a single default plugin implementation that would be used for all placement needs or a default + other ones ones (with names that can then be selected by callers of the Collection API passing a `placement` parameter as suggested in the changes to `CollectionAdminRequest.java`). > Are there similar examples in Solr code (loading plugins possibly on a per collection basis) that I can get inspiration from or reuse? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14716) Ref Guide: update leader/follower terminology
[ https://issues.apache.org/jira/browse/SOLR-14716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173227#comment-17173227 ] Cassandra Targett commented on SOLR-14716: -- Discussion on this in the Dev list: https://lists.apache.org/thread.html/rdb8ea68b733ab7a1696465ef3a0e5ad8e000c7159a2bf649ec86a3ff%40%3Cdev.lucene.apache.org%3E > Ref Guide: update leader/follower terminology > - > > Key: SOLR-14716 > URL: https://issues.apache.org/jira/browse/SOLR-14716 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Cassandra Targett >Priority: Major > > The effort to remove oppressive terminology in SOLR-14702 led to somewhat > awkward phrasing on how to refer to non-SolrCloud configurations, > specifically "leader/follower mode", which is potentially very confusing > since SolrCloud also has leaders and one could consider replicas to be > followers. > I propose that we standardize what we call these two modes as "coordinated > mode" (SolrCloud) and "uncoordinated mode" (or "non-coordinated" if people > prefer). I chose this because in thinking about what really differentiates > the two approaches is the ZooKeeper coordination for requests, configs, etc. > There are other differences too, of course, but that's the biggest one that > stuck out to me as a key differentiator and applicable in the naming. > There are also places in the Ref Guide where we refer to "standalone mode", > which in many cases means "any cluster not running SolrCloud". This has > always been problematic, because the word "standalone" implies a single node, > but it's of course pretty much always been possible to have a cluster of > multiple nodes that don't run SolrCloud/ZK. This issue would address those > examples also. > Note that I'm not proposing replacing the word "SolrCloud" throughout the > documentation. Instead I'll augment the use of the word "SolrCloud" with > clarification that this term means "coordinated mode". Later if we ever > replace SolrCloud references in code and fully remove that name, the > conceptual groundwork will have already been laid for users. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] mayya-sharipova opened a new pull request #1725: LUCENE-9449 Skip docs with _doc sort and "after"
mayya-sharipova opened a new pull request #1725: URL: https://github.com/apache/lucene-solr/pull/1725 Enhance DocComparator to provide an iterator over competitive documents when searching with "after" FieldDoc. This iterator can quickly position on the desired "after" document, and skip all documents before "after" or even whole segments that contain only documents before "after". Related to LUCENE-9280 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8626) standardise test class naming
[ https://issues.apache.org/jira/browse/LUCENE-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173238#comment-17173238 ] Christine Poerschke commented on LUCENE-8626: - {quote}Thank you very much for kicking this effort off back in 2018. ... If you don't mind, I'd like to take this effort a bit further and to completion via a PR. {quote} No problem. I didn't and don't have the bandwidth to fully see this through to completion myself. My hopes for the ticket were and are simply for it be to a collaborative open space where multiple folks could contribute: * analysis of what we have now * thoughts on whether or not to standardise * thoughts on what to standardise to and why * independent incremental test renaming ** if now renaming a small number of classes helps remove active productivity detriments in a particular area of code then that could be worth doing independently ** if then later renaming of a larger number of classes (for the overall standardisation) renames a few classes back to their previous name then so be it * code snippets (groovy, shell script, algorithm ideas) on how standardisation might be enforced * other stuff > standardise test class naming > - > > Key: LUCENE-8626 > URL: https://issues.apache.org/jira/browse/LUCENE-8626 > Project: Lucene - Core > Issue Type: Test >Reporter: Christine Poerschke >Priority: Major > Attachments: SOLR-12939.01.patch, SOLR-12939.02.patch, > SOLR-12939.03.patch, SOLR-12939_hoss_validation_groovy_experiment.patch > > > This was mentioned and proposed on the dev mailing list. Starting this ticket > here to start to make it happen? > History: This ticket was created as > https://issues.apache.org/jira/browse/SOLR-12939 ticket and then got > JIRA-moved to become https://issues.apache.org/jira/browse/LUCENE-8626 ticket. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
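On the "code snippets ... on how standardisation might be enforced" point, a rough, hypothetical tally of the two conventions using plain JDK APIs (no build integration implied):
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class TestNamingTally {
  public static void main(String[] args) throws IOException {
    Path root = Paths.get(args.length > 0 ? args[0] : ".");
    System.out.println("Test* prefix: " + count(root, "Test[A-Z].*\\.java"));
    System.out.println("*Test suffix: " + count(root, ".*Test\\.java"));
  }

  static long count(Path root, String nameRegex) throws IOException {
    try (Stream<Path> paths = Files.walk(root)) {
      return paths.filter(p -> p.toString().replace('\\', '/').contains("/src/test/"))
                  .filter(p -> p.getFileName().toString().matches(nameRegex))
                  .count();
    }
  }
}
{code}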
[jira] [Commented] (LUCENE-8626) standardise test class naming
[ https://issues.apache.org/jira/browse/LUCENE-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173239#comment-17173239 ] Christine Poerschke commented on LUCENE-8626: - bq. Within the lucene test suite, it is nearly 90%. ... bq. Within the solr test suite, things are less consistent: ... What are people's thoughts on "lucene" and "solr" standardising to the same naming, or not? To me it seems "lucene" standardising on X and "solr" standardising on Y could be confusing but it would still be clearer than neither of them being standardised. > standardise test class naming > - > > Key: LUCENE-8626 > URL: https://issues.apache.org/jira/browse/LUCENE-8626 > Project: Lucene - Core > Issue Type: Test >Reporter: Christine Poerschke >Priority: Major > Attachments: SOLR-12939.01.patch, SOLR-12939.02.patch, > SOLR-12939.03.patch, SOLR-12939_hoss_validation_groovy_experiment.patch > > > This was mentioned and proposed on the dev mailing list. Starting this ticket > here to start to make it happen? > History: This ticket was created as > https://issues.apache.org/jira/browse/SOLR-12939 ticket and then got > JIRA-moved to become https://issues.apache.org/jira/browse/LUCENE-8626 ticket. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1684: SOLR-14613: strongly typed initial proposal for placement plugin interface
murblanc commented on a change in pull request #1684: URL: https://github.com/apache/lucene-solr/pull/1684#discussion_r467127445 ## File path: solr/core/src/java/org/apache/solr/cluster/placement/PlacementPlugin.java ## @@ -0,0 +1,41 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.solr.cluster.placement; + +/** + * Implemented by external plugins to control replica placement and movement on the search cluster (as well as other things + * such as cluster elasticity?) when cluster changes are required (initiated elsewhere, most likely following a Collection + * API call). + */ +public interface PlacementPlugin { Review comment: I see a few options for passing config to the plugin: - Pass the config map in the `computePlacement` call along with the rest of the parameters - Provide in one of the passed factories the ability of the plugin code to call back into Solr and get the config - Add a `configure` call on the plugin as you suggest (then the Solr infra decides when to call this method) - Pass the config when the plugin class is instantiated. This might be equivalent to passing it in the `computePlacement` method if a new plugin class instance is created for each new computation. So the real choice is do we create a new plugin class instance per placement computation or reuse a given one? Creating a new one for each call is likely a simpler programming model for the plugin developer, it can use class member variables freely and if it wants to keep some state it has to make it static... With a new instance per computation, there would be no notion of "configuration update". A separate call into the plugin as you suggest or passing the config in `computePlacement` is equivalent (and the latter likely easier to handle in plugin code). I'm tempted to pass the config with each call to `computePlacement` (assuming the saving of not passing it when the plugin doesn't need it are non measurable). I'm also tempted to make it `Map` given it will (most likely?) come from XML and the plugin code would have to deal with what the config means anyway and what types to cast it to... This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
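A sketch of the "pass the config with each call to `computePlacement`" option, using placeholder type names (`Cluster`, `PlacementRequest`, `PlacementPlan`, `PropertyValueFetcher` stand in for whatever the PR ends up defining):
```java
import java.util.Map;

// Only the config-passing shape matters here; everything else is a placeholder.
public interface PlacementPlugin {
  PlacementPlan computePlacement(Cluster cluster, PlacementRequest request,
                                 PropertyKeyFactory keyFactory, PropertyValueFetcher valueFetcher,
                                 Map<String, String> config)
      throws PlacementException, InterruptedException;
}
```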
[jira] [Commented] (SOLR-13499) Fix "Apache License, Version 2.0" spelling -- in pom.xml.template
[ https://issues.apache.org/jira/browse/SOLR-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173245#comment-17173245 ] ASF subversion and git services commented on SOLR-13499: Commit a96499e6af4b699e9ce9675e7af2e75dd5046900 in lucene-solr's branch refs/heads/master from Vincent Privat [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a96499e ] SOLR-13499: Fix "Apache License, Version 2.0" spelling in in pom.xml.template (#674) > Fix "Apache License, Version 2.0" spelling -- in pom.xml.template > -- > > Key: SOLR-13499 > URL: https://issues.apache.org/jira/browse/SOLR-13499 > Project: Solr > Issue Type: Bug > Components: Build >Affects Versions: 8.1.1 >Reporter: Vincent Privat >Priority: Trivial > Labels: license, maven, pom > Time Spent: 20m > Remaining Estimate: 0h > > There are many Java libraries licensed under "Apache License, Version 2.0" > that do not use its official spelling. > This causes issues like https://issues.apache.org/jira/browse/MPIR-382: with > every library defining its own spelling, it's difficult in large projects to > have a clear view of all licenses in use. > Solr uses "Apache 2" instead of the official "Apache License, Version 2.0" > spelling. ..._in the pom.xml.template file used by the shadow maven build_ -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] cpoerschke merged pull request #674: SOLR-13499: Fix "Apache License, Version 2.0" spelling in pom.xml.template
cpoerschke merged pull request #674: URL: https://github.com/apache/lucene-solr/pull/674 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] tflobbe commented on pull request #1602: SOLR-14582: Expose IWC.setMaxCommitMergeWaitMillis in Solr's index config
tflobbe commented on pull request #1602: URL: https://github.com/apache/lucene-solr/pull/1602#issuecomment-670593930 @dsmiley, looks good? should I merge? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13499) Fix "Apache License, Version 2.0" spelling -- in pom.xml.template
[ https://issues.apache.org/jira/browse/SOLR-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173249#comment-17173249 ] ASF subversion and git services commented on SOLR-13499: Commit e8d71d88ab32e41d41553016bf3b63c996e8dfee in lucene-solr's branch refs/heads/branch_8x from Vincent Privat [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e8d71d8 ] SOLR-13499: Fix "Apache License, Version 2.0" spelling in in pom.xml.template (#674) > Fix "Apache License, Version 2.0" spelling -- in pom.xml.template > -- > > Key: SOLR-13499 > URL: https://issues.apache.org/jira/browse/SOLR-13499 > Project: Solr > Issue Type: Bug > Components: Build >Affects Versions: 8.1.1 >Reporter: Vincent Privat >Priority: Trivial > Labels: license, maven, pom > Time Spent: 20m > Remaining Estimate: 0h > > There are many Java libraries licensed under "Apache License, Version 2.0" > that do not use its official spelling. > This causes issues like https://issues.apache.org/jira/browse/MPIR-382: with > every library defining its own spelling, it's difficult in large projects to > have a clear view of all licenses in use. > Solr uses "Apache 2" instead of the official "Apache License, Version 2.0" > spelling. ..._in the pom.xml.template file used by the shadow maven build_ -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] dsmiley commented on pull request #1723: SOLR prometheus: simplify concurrent collection
dsmiley commented on pull request #1723: URL: https://github.com/apache/lucene-solr/pull/1723#issuecomment-670598671 @ErickErickson the gradle validateLogCalls is complaining because I added a log statement without explicitly checking "log.isErrorEnabled" first. validate-log-calls.gradle is *not* supposed to do that for an error level log. Can you see what's going on please? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9379) Directory based approach for index encryption
[ https://issues.apache.org/jira/browse/LUCENE-9379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173265#comment-17173265 ] David Smiley commented on LUCENE-9379: -- Rajeswari -- you are referring to some SolrCloud concepts. The scenario you describe would _often_ co-locate your "tenants", and thus any OS or Lucene Directory or Codec levels simply +won't work+. For example if you had a field "name" that's indexed, then it's an index for all docs in that index, spanning your multiple "tenants". Instead, you could either create separate Collections, or have one Collection with "implicit" (really explicit) shard creation/naming for each tenant, but you'd have to be careful in all you do to query/index a specific shard instead of accidentally querying the whole. > Directory based approach for index encryption > - > > Key: LUCENE-9379 > URL: https://issues.apache.org/jira/browse/LUCENE-9379 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Bruno Roustant >Assignee: Bruno Roustant >Priority: Major > Time Spent: 2h 20m > Remaining Estimate: 0h > > +Important+: This Lucene Directory wrapper approach is to be considered only > if an OS level encryption is not possible. OS level encryption better fits > Lucene usage of OS cache, and thus is more performant. > But there are some use-case where OS level encryption is not possible. This > Jira issue was created to address those. > > > The goal is to provide optional encryption of the index, with a scope limited > to an encryptable Lucene Directory wrapper. > Encryption is at rest on disk, not in memory. > This simple approach should fit any Codec as it would be orthogonal, without > modifying APIs as much as possible. > Use a standard encryption method. Limit perf/memory impact as much as > possible. > Determine how callers provide encryption keys. They must not be stored on > disk. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
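A hedged SolrJ sketch of the "one collection with implicit (explicit) shard per tenant" layout described above; collection, configset and shard names are illustrative:
{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

static void perTenantShards(CloudSolrClient client) throws Exception {
  // One named shard per tenant; with the implicit router the caller does the routing.
  CollectionAdminRequest.createCollectionWithImplicitRouter("tenants", "_default", "tenantA,tenantB", 1)
      .process(client);

  SolrInputDocument doc = new SolrInputDocument();
  doc.addField("id", "1");
  doc.addField("name", "example");

  UpdateRequest update = new UpdateRequest();
  update.add(doc);
  update.setParam("_route_", "tenantA");   // index into tenantA's shard only
  update.process(client, "tenants");
  client.commit("tenants");

  SolrQuery query = new SolrQuery("name:example");
  query.set("_route_", "tenantA");         // restrict the query to tenantA's shard
  client.query("tenants", query);
}
{code}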
[jira] [Commented] (LUCENE-8626) standardise test class naming
[ https://issues.apache.org/jira/browse/LUCENE-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173271#comment-17173271 ] Erick Erickson commented on LUCENE-8626: Both should be the same IMO. > standardise test class naming > - > > Key: LUCENE-8626 > URL: https://issues.apache.org/jira/browse/LUCENE-8626 > Project: Lucene - Core > Issue Type: Test >Reporter: Christine Poerschke >Priority: Major > Attachments: SOLR-12939.01.patch, SOLR-12939.02.patch, > SOLR-12939.03.patch, SOLR-12939_hoss_validation_groovy_experiment.patch > > > This was mentioned and proposed on the dev mailing list. Starting this ticket > here to start to make it happen? > History: This ticket was created as > https://issues.apache.org/jira/browse/SOLR-12939 ticket and then got > JIRA-moved to become https://issues.apache.org/jira/browse/LUCENE-8626 ticket. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Resolved] (SOLR-13499) Fix "Apache License, Version 2.0" spelling -- in pom.xml.template
[ https://issues.apache.org/jira/browse/SOLR-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved SOLR-13499. Fix Version/s: 8.7 master (9.0) Resolution: Fixed Thanks [~Don-vip]! > Fix "Apache License, Version 2.0" spelling -- in pom.xml.template > -- > > Key: SOLR-13499 > URL: https://issues.apache.org/jira/browse/SOLR-13499 > Project: Solr > Issue Type: Bug > Components: Build >Affects Versions: 8.1.1 >Reporter: Vincent Privat >Priority: Trivial > Labels: license, maven, pom > Fix For: master (9.0), 8.7 > > Time Spent: 20m > Remaining Estimate: 0h > > There are many Java libraries licensed under "Apache License, Version 2.0" > that do not use its official spelling. > This causes issues like https://issues.apache.org/jira/browse/MPIR-382: with > every library defining its own spelling, it's difficult in large projects to > have a clear view of all licenses in use. > Solr uses "Apache 2" instead of the official "Apache License, Version 2.0" > spelling. ..._in the pom.xml.template file used by the shadow maven build_ -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] ErickErickson commented on pull request #1723: SOLR prometheus: simplify concurrent collection
ErickErickson commented on pull request #1723: URL: https://github.com/apache/lucene-solr/pull/1723#issuecomment-670608605 The first part of the line gives the most explicit statement: "cause: 'getMessage or getCause in log line’”, the latter part of the line is more generic. This check was added as part of SOLR-14523, since these two calls were often used in places that provided too little information. If you think this is an exception, you can add //logok to the line and it will pass. Erick > On Aug 7, 2020, at 12:23 PM, David Smiley wrote: > > > @ErickErickson the gradle validateLogCalls is complaining because I added a log statement without explicitly checking "log.isErrorEnabled" first. validate-log-calls.gradle is not supposed to do that for an error level log. Can you see what's going on please? > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub, or unsubscribe. > This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
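For the record, the suppression looks like this in practice (the log message itself is illustrative, not the line from the PR):
```java
// validateLogCalls flags getMessage()/getCause() inside log lines; //logok acknowledges the check.
log.error("Failed to collect Prometheus metrics: {}", e.getMessage()); //logok
```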
[GitHub] [lucene-solr] HoustonPutman commented on pull request #1716: SOLR-14706: Fix support for default autoscaling policy
HoustonPutman commented on pull request #1716: URL: https://github.com/apache/lucene-solr/pull/1716#issuecomment-670612283 Tested this out locally, upgrading 8.6.0 -> 8.6.1 RC1 (failure) -> this PR (success). Also added you to the CHANGES.txt. Gus, thanks for all the invaluable help! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] atris commented on pull request #1686: SOLR-13528: Implement Request Rate Limiters
atris commented on pull request #1686: URL: https://github.com/apache/lucene-solr/pull/1686#issuecomment-670616622 @madrob Updated; please take a look and let me know your thoughts. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] ErickErickson commented on pull request #1723: SOLR prometheus: simplify concurrent collection
ErickErickson commented on pull request #1723: URL: https://github.com/apache/lucene-solr/pull/1723#issuecomment-670617417 If you only knew how many times I’ve beaten my head against a wall for _hours_, then read something that was in front of my face all the time more carefully… ;) > On Aug 7, 2020, at 1:04 PM, David Smiley wrote: > > > Woops; sorry; I should have looked more clearly! This is a "logok" situation I should do. > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub, or unsubscribe. > This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14702) Remove Master and Slave from Code Base and Docs
[ https://issues.apache.org/jira/browse/SOLR-14702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173321#comment-17173321 ] ASF subversion and git services commented on SOLR-14702: Commit 2bf092b8ddc6cf8aadf727a3e30bd8e93ccb59c4 in lucene-solr's branch refs/heads/master from Tomas Eduardo Fernandez Lobbe [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2bf092b ] SOLR-14702: Add Upgrade Notes and CHANGES entry (#1718) > Remove Master and Slave from Code Base and Docs > --- > > Key: SOLR-14702 > URL: https://issues.apache.org/jira/browse/SOLR-14702 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: master (9.0) >Reporter: Marcus Eagan >Priority: Critical > Attachments: SOLR-14742-testfix.patch > > Time Spent: 16.5h > Remaining Estimate: 0h > > Every time I read _master_ and _slave_, I get pissed. > I think about the last and only time I remember visiting my maternal great > grandpa in Alabama at four years old. He was a sharecropper before WWI, where > he lost his legs, and then he was back to being a sharecropper somehow after > the war. Crazy, I know. I don't know if the world still called his job > sharecropping in 1993, but he was basically a slave—in America. He lived in > the same shack that his father, and his grandfather (born a slave) lived in > down in Alabama. Believe it or not, my dad's (born in 1926) grandfather was > actually born a slave, freed shortly after birth by his owner father. I never > met him, though. He died in the 40s. > Anyway, I cannot police all terms in the repo and do not wish to. This > master/slave shit is archaic and misleading on technical grounds. Thankfully, > there's only a handful of files in code and documentation that still talk > about masters and slaves. We should replace all of them. > There are so many ways to reword it. In fact, unless anyone else objects or > wants to do the grunt work to help my stress levels, I will open the pull > request myself in effort to make this project and community more inviting to > people of all backgrounds and histories. We can have leader/follower, or > primary/secondary, but none of this Master/Slave nonsense. I'm sick of the > garbage. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] tflobbe merged pull request #1718: SOLR-14702: Add Upgrade Notes and CHANGES entry
tflobbe merged pull request #1718: URL: https://github.com/apache/lucene-solr/pull/1718 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] atris commented on pull request #1718: SOLR-14702: Add Upgrade Notes and CHANGES entry
atris commented on pull request #1718: URL: https://github.com/apache/lucene-solr/pull/1718#issuecomment-670619396 I took a look late -- but +1 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] madrob commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters
madrob commented on a change in pull request #1686: URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r467165632 ## File path: solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java ## @@ -80,20 +79,12 @@ public RequestRateLimiter(RateLimiterConfig rateLimiterConfig) { * * @lucene.experimental -- Can cause slots to be blocked if a request borrows a slot and is itself long lived. */ - public Pair allowSlotBorrowing() throws InterruptedException { + public SlotMetadata allowSlotBorrowing() throws InterruptedException { Review comment: javadoc on this is out of date, still refers to booleans ## File path: solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java ## @@ -55,21 +54,21 @@ public RequestRateLimiter(RateLimiterConfig rateLimiterConfig) { * NOTE: Always check for a null metadata object even if this method returns a true -- this will be the scenario when * rate limiters are not enabled. * */ - public Pair handleRequest() throws InterruptedException { + public SlotMetadata handleRequest() throws InterruptedException { if (!rateLimiterConfig.isEnabled) { - return new Pair(true, new AcquiredSlotMetadata(null, null)); + return new SlotMetadata(null); Review comment: I'd prefer to see these as pre declared members so that we're not allocating a new metadata on each request. ## File path: solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java ## @@ -152,14 +143,22 @@ public RateLimiterConfig(SolrRequest.SolrRequestType requestType, boolean isEnab } } - // Represents the metadata for an acquired slot - static class AcquiredSlotMetadata { -public RequestRateLimiter requestRateLimiter; -public Semaphore usedPool; + // Represents the metadata for a slot + static class SlotMetadata { +private Semaphore usedPool; -public AcquiredSlotMetadata(RequestRateLimiter requestRateLimiter, Semaphore usedPool) { - this.requestRateLimiter = requestRateLimiter; +public SlotMetadata(Semaphore usedPool) { this.usedPool = usedPool; } + +public void decrementRequest() { + if (usedPool != null) { +usedPool.release(); + } +} + +public boolean isUsedPoolNull() { Review comment: This seems like an implementation detail that we don't need to expose? ## File path: solr/core/src/java/org/apache/solr/servlet/RateLimitManager.java ## @@ -132,12 +130,12 @@ public boolean handleRequest(HttpServletRequest request) throws InterruptedExcep Thread.currentThread().interrupt(); } -if (result != null && result.first()) { - if (result.second() == null) { -throw new IllegalStateException("AcquiredSlotMetadata object null even when slot is acquired and rate limiters are enabled"); - } +if (result == null) { Review comment: This should never happen, right? ## File path: solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java ## @@ -152,14 +143,22 @@ public RateLimiterConfig(SolrRequest.SolrRequestType requestType, boolean isEnab } } - // Represents the metadata for an acquired slot - static class AcquiredSlotMetadata { -public RequestRateLimiter requestRateLimiter; -public Semaphore usedPool; + // Represents the metadata for a slot + static class SlotMetadata { Review comment: 👍 ## File path: solr/core/src/java/org/apache/solr/servlet/RateLimitManager.java ## @@ -147,19 +145,16 @@ public boolean handleRequest(HttpServletRequest request) throws InterruptedExcep // Decrement the active requests in the rate limiter for the corresponding request type. 
public void decrementActiveRequests(HttpServletRequest request) { -RequestRateLimiter.AcquiredSlotMetadata acquiredSlotMetadata = activeRequestsMap.get(request); +RequestRateLimiter.SlotMetadata slotMetadata = activeRequestsMap.get(request); -if (acquiredSlotMetadata == null) { +if (slotMetadata == null) { Review comment: I think I would flip this condition so that we only have one return point from the method. ## File path: solr/core/src/java/org/apache/solr/servlet/RateLimitManager.java ## @@ -0,0 +1,183 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS
[GitHub] [lucene-solr] atris commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters
atris commented on a change in pull request #1686: URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r467176662 ## File path: solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java ## @@ -152,14 +143,22 @@ public RateLimiterConfig(SolrRequest.SolrRequestType requestType, boolean isEnab } } - // Represents the metadata for an acquired slot - static class AcquiredSlotMetadata { -public RequestRateLimiter requestRateLimiter; -public Semaphore usedPool; + // Represents the metadata for a slot + static class SlotMetadata { +private Semaphore usedPool; -public AcquiredSlotMetadata(RequestRateLimiter requestRateLimiter, Semaphore usedPool) { - this.requestRateLimiter = requestRateLimiter; +public SlotMetadata(Semaphore usedPool) { this.usedPool = usedPool; } + +public void decrementRequest() { + if (usedPool != null) { +usedPool.release(); + } +} + +public boolean isUsedPoolNull() { Review comment: We need this to avoid caching a request when the request rate limiter is disabled. Renaming the method. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] atris commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters
atris commented on a change in pull request #1686: URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r467178250 ## File path: solr/core/src/java/org/apache/solr/servlet/RateLimitManager.java ## @@ -132,12 +130,12 @@ public boolean handleRequest(HttpServletRequest request) throws InterruptedExcep Thread.currentThread().interrupt(); } -if (result != null && result.first()) { - if (result.second() == null) { -throw new IllegalStateException("AcquiredSlotMetadata object null even when slot is acquired and rate limiters are enabled"); - } +if (result == null) { Review comment: Yes, that's why the check is there. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
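For readers following along, a sketch of the single-return shape madrob is suggesting for decrementActiveRequests; the map lookup and decrementRequest call appear in the diffs above, while the map-cleanup line is an assumption, so treat this as a sketch rather than the committed code:

{code:java}
// Fragment of RateLimitManager (sketch): release the slot only when one was recorded,
// with a single exit point instead of an early return on the null case.
public void decrementActiveRequests(HttpServletRequest request) {
  RequestRateLimiter.SlotMetadata slotMetadata = activeRequestsMap.get(request);

  if (slotMetadata != null) {
    slotMetadata.decrementRequest();   // releases the underlying semaphore, if any
    activeRequestsMap.remove(request); // assumption: the bookkeeping entry is dropped here
  }
}
{code}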
[GitHub] [lucene-solr] tflobbe merged pull request #1602: SOLR-14582: Expose IWC.setMaxCommitMergeWaitMillis in Solr's index config
tflobbe merged pull request #1602: URL: https://github.com/apache/lucene-solr/pull/1602 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14582) Expose IWC.setMaxCommitMergeWaitMillis as an expert feature in Solr's index config
[ https://issues.apache.org/jira/browse/SOLR-14582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173386#comment-17173386 ] ASF subversion and git services commented on SOLR-14582: Commit e6275d9970702c250b69792b5491ef9d94b08638 in lucene-solr's branch refs/heads/master from Tomas Eduardo Fernandez Lobbe [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e6275d9 ] SOLR-14582: Expose IWC.setMaxCommitMergeWaitMillis in Solr's index config (#1602) > Expose IWC.setMaxCommitMergeWaitMillis as an expert feature in Solr's index > config > -- > > Key: SOLR-14582 > URL: https://issues.apache.org/jira/browse/SOLR-14582 > Project: Solr > Issue Type: Improvement >Reporter: Tomas Eduardo Fernandez Lobbe >Assignee: Tomas Eduardo Fernandez Lobbe >Priority: Trivial > Time Spent: 2h > Remaining Estimate: 0h > > LUCENE-8962 added the ability to merge segments synchronously on commit. This > isn't done by default and the default {{MergePolicy}} won't do it, but custom > merge policies can take advantage of this. Solr allows plugging in custom > merge policies, so if someone wants to make use of this feature they could, > however, they need to set {{IndexWriterConfig.maxCommitMergeWaitSeconds}} to > something greater than 0. > Since this is an expert feature, I plan to document it only in javadoc and > not the ref guide. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14582) Expose IWC.setMaxCommitMergeWaitMillis as an expert feature in Solr's index config
[ https://issues.apache.org/jira/browse/SOLR-14582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173387#comment-17173387 ] ASF subversion and git services commented on SOLR-14582: Commit 1b4a905115783d53288d43faf5867a2aa945dfd9 in lucene-solr's branch refs/heads/branch_8x from Tomas Eduardo Fernandez Lobbe [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1b4a905 ] SOLR-14582: Expose IWC.setMaxCommitMergeWaitMillis in Solr's index config (#1602) > Expose IWC.setMaxCommitMergeWaitMillis as an expert feature in Solr's index > config > -- > > Key: SOLR-14582 > URL: https://issues.apache.org/jira/browse/SOLR-14582 > Project: Solr > Issue Type: Improvement >Reporter: Tomas Eduardo Fernandez Lobbe >Assignee: Tomas Eduardo Fernandez Lobbe >Priority: Trivial > Time Spent: 2h > Remaining Estimate: 0h > > LUCENE-8962 added the ability to merge segments synchronously on commit. This > isn't done by default and the default {{MergePolicy}} won't do it, but custom > merge policies can take advantage of this. Solr allows plugging in custom > merge policies, so if someone wants to make use of this feature they could, > however, they need to set {{IndexWriterConfig.maxCommitMergeWaitSeconds}} to > something greater than 0. > Since this is an expert feature, I plan to document it only in javadoc and > not the ref guide. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Resolved] (SOLR-14582) Expose IWC.setMaxCommitMergeWaitMillis as an expert feature in Solr's index config
[ https://issues.apache.org/jira/browse/SOLR-14582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomas Eduardo Fernandez Lobbe resolved SOLR-14582. -- Fix Version/s: 8.7 master (9.0) Resolution: Fixed > Expose IWC.setMaxCommitMergeWaitMillis as an expert feature in Solr's index > config > -- > > Key: SOLR-14582 > URL: https://issues.apache.org/jira/browse/SOLR-14582 > Project: Solr > Issue Type: Improvement >Reporter: Tomas Eduardo Fernandez Lobbe >Assignee: Tomas Eduardo Fernandez Lobbe >Priority: Trivial > Fix For: master (9.0), 8.7 > > Time Spent: 2h > Remaining Estimate: 0h > > LUCENE-8962 added the ability to merge segments synchronously on commit. This > isn't done by default and the default {{MergePolicy}} won't do it, but custom > merge policies can take advantage of this. Solr allows plugging in custom > merge policies, so if someone wants to make use of this feature they could, > however, they need to set {{IndexWriterConfig.maxCommitMergeWaitSeconds}} to > something greater than 0. > Since this is an expert feature, I plan to document it only in javadoc and > not the ref guide. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
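As a rough illustration of the expert knob this issue exposes, here is a plain-Lucene sketch. The setter name is taken from the issue title; its exact signature (assumed here to take milliseconds) and the Solr-side solrconfig.xml syntax are not shown in the thread and may differ by version. Note the default merge policy does not request commit-time merges, so the setting only matters together with a custom MergePolicy.

{code:java}
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class CommitMergeWaitExample {
  public static void main(String[] args) throws Exception {
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
    // Wait up to 5 seconds for merges requested at commit time (name per SOLR-14582).
    iwc.setMaxCommitMergeWaitMillis(5000);
    try (IndexWriter writer = new IndexWriter(FSDirectory.open(Paths.get("/tmp/idx")), iwc)) {
      // index documents, then commit(): qualifying merges may run before commit returns
      writer.commit();
    }
  }
}
{code}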
[GitHub] [lucene-solr] madrob commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters
madrob commented on a change in pull request #1686: URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r467213763 ## File path: solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java ## @@ -40,32 +42,35 @@ private final Semaphore borrowableSlotsPool; private final RateLimiterConfig rateLimiterConfig; + private final SlotMetadata guaranteedSlotMetadata; + private final SlotMetadata borrowedSlotMetadata; + private final SlotMetadata nullSlotMetadata; Review comment: can this be static? pretty minor but since it doesn't do anything, it can be shared by all the instances and save some memory ## File path: solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java ## @@ -157,7 +162,7 @@ public void decrementRequest() { } } -public boolean isUsedPoolNull() { +public boolean isReleasable() { return usedPool == null; Review comment: Should be != null, right? Check the condition wherever this is called too, might need to invert those. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] atris commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters
atris commented on a change in pull request #1686: URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r467219496 ## File path: solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java ## @@ -157,7 +162,7 @@ public void decrementRequest() { } } -public boolean isUsedPoolNull() { +public boolean isReleasable() { return usedPool == null; Review comment: Fixed This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
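Putting the review comments together, the slot metadata ends up roughly as below. Field and method names come from the diffs above, with the != null fix applied; this is a fragment of RequestRateLimiter and a sketch of where the thread lands, not the committed code.

{code:java}
// Inside RequestRateLimiter (fragment); requires java.util.concurrent.Semaphore.
// Represents the metadata for a slot; a null pool is the sentinel used when
// rate limiting did not apply to the request.
static class SlotMetadata {
  private final Semaphore usedPool;

  public SlotMetadata(Semaphore usedPool) {
    this.usedPool = usedPool;
  }

  public void decrementRequest() {
    if (usedPool != null) {
      usedPool.release();
    }
  }

  // True only when a permit was actually acquired and must be released later.
  public boolean isReleasable() {
    return usedPool != null;
  }
}
{code}

The pre-declared guaranteedSlotMetadata, borrowedSlotMetadata, and nullSlotMetadata members madrob mentions can then be allocated once per limiter (or shared, in the null case) instead of constructing a new SlotMetadata on every request.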
[jira] [Created] (SOLR-14722) timeAllowed should track from the very start of the request
David Smiley created SOLR-14722: --- Summary: timeAllowed should track from the very start of the request Key: SOLR-14722 URL: https://issues.apache.org/jira/browse/SOLR-14722 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: David Smiley Assignee: David Smiley The "timeAllowed" param starts tracking time from the moment {{SolrQueryTimeoutImpl.set}} is called. I think it ought to track from when the request actually started, which is {{SolrQueryRequest.getRequestTimer}}. Lazy core loading can substantially increase the delta. Additionally, I'd like to make some small improvements to SolrQueryTimeoutImpl to make it easier to use. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14722) timeAllowed should track from the very start of the request
[ https://issues.apache.org/jira/browse/SOLR-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173435#comment-17173435 ] Atri Sharma commented on SOLR-14722: Huge +1. > timeAllowed should track from the very start of the request > --- > > Key: SOLR-14722 > URL: https://issues.apache.org/jira/browse/SOLR-14722 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > > The "timeAllowed" param starts tracking time from the moment > {{SolrQueryTimeoutImpl.set}} is called. I think it ought to track from when > the request actually started, which is {{SolrQueryRequest.getRequestTimer}}. > Lazy core loading can substantially increase the delta. Additionally, I'd > like to make some small improvements to SolrQueryTimeoutImpl to make it > easier to use. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
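A hedged sketch of what "track from the very start of the request" could look like; the param lookup and timer call below are guesses at the shape of the change, not code from the issue:

{code:java}
// Fragment (sketch): compute the remaining time budget from the request's own timer,
// so delays before SolrQueryTimeoutImpl.set() (e.g. lazy core loading) count against it.
long timeAllowed = req.getParams().getLong(CommonParams.TIME_ALLOWED, -1L);
if (timeAllowed > 0) {
  long alreadyElapsed = (long) req.getRequestTimer().getTime(); // ms since the request started
  long remaining = Math.max(0L, timeAllowed - alreadyElapsed);
  // arm the timeout with 'remaining' rather than the full timeAllowed value
}
{code}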
[GitHub] [lucene-solr] atris merged pull request #1686: SOLR-13528: Implement Request Rate Limiters
atris merged pull request #1686: URL: https://github.com/apache/lucene-solr/pull/1686 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13528) Rate limiting in Solr
[ https://issues.apache.org/jira/browse/SOLR-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173441#comment-17173441 ] ASF subversion and git services commented on SOLR-13528: Commit a074418da0d03b6beff2ca4199660c04f6348dfb in lucene-solr's branch refs/heads/master from Atri Sharma [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a074418 ] SOLR-13528: Implement Request Rate Limiters (#1686) This commit introduces two functionalities: request rate limiting and ability to identify requests based on type (indexing, search, admin). The default rate limiter rate limits query requests based on configurable parameters which can be set in web.xml. Note that this rate limiting works at a JVM level, not a core/collection level. > Rate limiting in Solr > - > > Key: SOLR-13528 > URL: https://issues.apache.org/jira/browse/SOLR-13528 > Project: Solr > Issue Type: New Feature >Reporter: Anshum Gupta >Assignee: Atri Sharma >Priority: Major > Time Spent: 9.5h > Remaining Estimate: 0h > > In relation to SOLR-13527, Solr also needs a way to throttle update and > search requests based on usage metrics. This is the umbrella JIRA for both > update and search rate limiting. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] atris commented on pull request #1686: SOLR-13528: Implement Request Rate Limiters
atris commented on pull request #1686: URL: https://github.com/apache/lucene-solr/pull/1686#issuecomment-670692047 Thank you @madrob and @anshumg ! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14630) CloudSolrClient doesn't pick correct core when server contains more shards
[ https://issues.apache.org/jira/browse/SOLR-14630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173449#comment-17173449 ] David Smiley commented on SOLR-14630: - It'd be great if we could make an integration test (_not_ a unit test) for this somehow. Off-hand, I'm not sure what an integration test would look for to observe when a request re-routes. > CloudSolrClient doesn't pick correct core when server contains more shards > -- > > Key: SOLR-14630 > URL: https://issues.apache.org/jira/browse/SOLR-14630 > Project: Solr > Issue Type: Bug > Components: SolrCloud, SolrJ >Affects Versions: 8.5.1, 8.5.2 >Reporter: Ivan Djurasevic >Priority: Major > > Precondition: create a collection with 4 shards on one server.
> During search and update, the SolrCloud client picks the wrong core even when _route_ exists in the query params. In the BaseSolrClient class, method sendRequest:
> {code:java}
> sortedReplicas.forEach(replica -> {
>   if (seenNodes.add(replica.getNodeName())) {
>     theUrlList.add(ZkCoreNodeProps.getCoreUrl(replica.getBaseUrl(), joinedInputCollections));
>   }
> });
> {code}
> The code above adds the base URL (localhost:8983/solr/collection_name) to theUrlList; it does not build the core address (localhost:8983/solr/core_name). If we change it to:
> {code:java}
> sortedReplicas.forEach(replica -> {
>   if (seenNodes.add(replica.getNodeName())) {
>     theUrlList.add(replica.getCoreUrl());
>   }
> });
> {code}
> the SolrCloud client picks the core identified by the _route_ parameter.
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
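For context, a minimal SolrJ sketch of the kind of routed request that exercises this code path; the ZooKeeper address, collection name, and routing key are made up for illustration:

{code:java}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.ShardParams;

public class RoutedQueryExample {
  public static void main(String[] args) throws Exception {
    // With the behavior described above, the request goes to a node-level URL and relies on
    // internal re-routing; the proposed fix targets the core that owns the routing key directly.
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
      SolrQuery query = new SolrQuery("*:*");
      query.set(ShardParams._ROUTE_, "myRouteKey!");
      QueryResponse rsp = client.query("collection_name", query);
      System.out.println("hits: " + rsp.getResults().getNumFound());
    }
  }
}
{code}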