Thanks for the reply.
>>The recovery is probably _caused_ by the node not responding to the update
request due to a timeout
Can we increase the update request timeout?
>>What kind of documents are you indexing? I have seen situations where massive
documents take so long that the request times out and starts this process.
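On the timeout question: if it is the client-side timeout, SolrJ lets you
raise it when building the client. A minimal sketch, with hypothetical URL
and values:

import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class LongTimeoutClient {
  public static void main(String[] args) throws Exception {
    // Hypothetical URL; raise the socket timeout so slow update requests
    // are not cut off before the node responds.
    try (HttpSolrClient client = new HttpSolrClient.Builder(
            "http://localhost:8983/solr/mycollection")
        .withConnectionTimeout(15000)   // ms to establish the connection
        .withSocketTimeout(300000)      // ms to wait for the response
        .build()) {
      // send updates with client.add(...) here
    }
  }
}

Note that node-to-node update timeouts are configured separately on the
server side (distribUpdateSoTimeout in solr.xml), so the client-side change
alone may not cover everything.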
Restore will only create the same number of shards as the original collection
had when you took the backup.
If you are on a cluster with enough resources, you can try splitting the
shards to the desired number later on.
Shard splitting has a more efficient implementation in Solr 8.x, but if you
have a mostly
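For the split itself, a hedged SolrJ sketch using the Collections API
(ZooKeeper host, collection, and shard names are hypothetical):

import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class SplitShardExample {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
            Collections.singletonList("zk1:2181"), Optional.empty()).build()) {
      // SPLITSHARD divides one shard into two; repeat per shard as needed.
      CollectionAdminRequest.splitShard("restored_collection")
          .setShardName("shard1")
          .process(client);
    }
  }
}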
Whoa:
Starting embedded ZooKeeper? Almost all production systems have an
external ZooKeeper ensemble. Did your old system connect to an external
ZooKeeper? You really need to solve that problem first. My guess is you’ve
(perhaps inadvertently) changed how Solr is started. You have to straighten
that out
Not sure where the Docker image came from, but according to:
https://issues.apache.org/jira/browse/SOLR-13818
Jackson was upgraded to 2.10.0 in Solr 8.4.
> On Jul 21, 2020, at 2:59 PM, Man with No Name
> wrote:
>
> Hey Guys,
> Our team is using Solr 8.4.1 in a Kubernetes cluster using the public image
Yep, that assumes you can afford a 5-minute gap between the time you send a
doc to Solr and the time your users can search it.
Part of it depends on what your indexing rate is. If you’re only sending docs
occasionally, you may want to make that longer. Frankly, though, the interval
there isn’t too important.
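If you want per-request control instead of relying only on solrconfig.xml,
SolrJ’s add call also takes a commitWithin argument. A minimal sketch,
assuming solrClient and doc already exist:

// Ask Solr to make this doc searchable within ~5 minutes of the add.
solrClient.add(doc, 300000);  // commitWithin, in milliseconds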
Hi.
We're using LTR, and after switching to multiple shards we found that
reranking happens on individual shards and that during the merge phase the
first-pass score isn't used. Currently our LTR model doesn't use textual
match and assumes that the reranked documents are already more or less good
in terms of textual match.
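For reference, a hedged sketch of what an LTR rerank request looks like from
SolrJ (the model name and query are hypothetical):

import org.apache.solr.client.solrj.SolrQuery;

public class LtrQueryExample {
  public static void main(String[] args) {
    SolrQuery q = new SolrQuery("user query text");
    // Second pass: the top reRankDocs hits are rescored by the LTR model;
    // on a sharded collection this rescoring happens per shard.
    q.add("rq", "{!ltr model=myModel reRankDocs=100}");
    System.out.println(q);
  }
}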
Erick,
Thanks for your quick response.
So in the solrconfig.xml keep the out-of-the-box autoCommit setting of 15 seconds:
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
and also have the autoSoftCommit setting set to something like 5 minutes:
<autoSoftCommit>
  <maxTime>300000</maxTime>
</autoSoftCommit>
Then in the existing SolrJ code just simply delete the commit line.
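In other words, something like this hedged sketch (URL and field values are
hypothetical), with no explicit commit call at all:

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class IndexWithoutExplicitCommit {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient solrClient = new HttpSolrClient.Builder(
            "http://localhost:8983/solr/mycollection").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");
      solrClient.add(doc);
      // No solrClient.commit(...) here: autoCommit handles durability and
      // autoSoftCommit handles visibility, per solrconfig.xml.
    }
  }
}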
Hi,
I'm using Solr 7.4.0 and I want to export 4TB of data from our current Solr
cluster to a different cluster. The new cluster has twice as many nodes as
the current cluster, and I want the data to be distributed among all the
nodes. Is this possible with the Backup/Restore feature considering
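For reference, a hedged SolrJ sketch of the backup and restore calls (names
and the shared location are hypothetical; the location must be readable from
the new cluster):

import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class BackupRestoreExample {
  public static void main(String[] args) throws Exception {
    // On the old cluster: write the backup to a shared location.
    try (CloudSolrClient oldCluster = new CloudSolrClient.Builder(
            Collections.singletonList("old-zk:2181"), Optional.empty()).build()) {
      CollectionAdminRequest.backupCollection("mycollection", "mybackup")
          .setLocation("/shared/backups")
          .process(oldCluster);
    }
    // On the new cluster: restore; the shard count matches the backup.
    try (CloudSolrClient newCluster = new CloudSolrClient.Builder(
            Collections.singletonList("new-zk:2181"), Optional.empty()).build()) {
      CollectionAdminRequest.restoreCollection("mycollection", "mybackup")
          .setLocation("/shared/backups")
          .process(newCluster);
    }
  }
}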
Hey Guys,
Our team is using Solr 8.4.1 in a Kubernetes cluster using the public image
from Docker Hub. Before getting deployed to the cluster, the containers
get whitescanned, which lists all the CVEs in the container. This is the list
of CVEs we have for Solr:
CVE-2020-11619, CVE-2020-11620, CVE-2020-88
This was very helpful for diagnostics. It is choking on ZooKeeper and can't
start it.
Where are the index files written? I assume that if they are artifacts from
the previous install, they must be outside the current Solr directory, but I
can't find them.
2020-07-21 17:23:09.244 INFO (main) [ ] o.a.s.
Hello Everyone,
I just downloaded Sitecore 9.3.0 and installed Solr using the JSON file
that Sitecore provided. The installation was seamless and Solr was working
as expected. But when I checked the logs, I am getting this warning. I am
attaching the Solr logs as well for your reference.
o.e.j.u.s.S.c
What you’re seeing is the input from the client. You’ve passed true, true,
which are waitFlush and waitSearcher; that overload sets softCommit _for that
call_ to false. It has nothing to do with the settings in your config file.
bq. I am not passing the parameter to do a softCommit in the SolrJ command.
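Concretely, the two SolrJ commit overloads involved, as a sketch (solrClient
is assumed to be an existing SolrClient):

solrClient.commit(true, true);        // waitFlush, waitSearcher: softCommit stays false (hard commit)
solrClient.commit(true, true, true);  // waitFlush, waitSearcher, softCommit=true: a soft commit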
I am using Solr 8.5 in cloud mode, and in my collection I have edited the
solrconfig.xml file to use
<maxTime>1000</maxTime>
and commented out the default configuration
We are using SolrJ to post files to Solr; here is the snippet of Java
code that does it:
try (HttpSolrClient solrClient = solr.build
Upgrade to 6.6.2. That will be compatible, but will fix several bugs that were
discovered during the 6.x releases.
If the problem happens after that, ask again. It might, we’ve had some issues
with 6.6.2, but upgrade first.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.
The recovery is probably _caused_ by the node not responding to the update
request due to a timeout. The JIRA you reference is unrelated I’d guess.
What kind of documents are you indexing? I have seen situations where massive
documents take so long that the request times out and starts this process.
If you indexed any data and did _not_ delete the indexes (the data directories)
then you may well have index files written with 8x, which 7x is incapable of
dealing with. That’s the error I’d look for first in the solr log files that
Colvin suggested.
Best,
Erick
> On Jul 21, 2020, at 4:14 AM, C
I get the following exception when accessing the Cloud -> ZK Status page in
Solr 8.6.0:
ERROR [20200721T080439,063] qtp478489615-24
org.apache.solr.servlet.HttpSolrCall -
null:java.lang.NumberFormatException: null
at java.base/java.lang.Integer.parseInt(Integer.java:620)
at java.
Is there any JIRA issue for Solr to track the problem of using
multiple GraphFilter factories in one analysis chain?
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
JIRA created.
> Am 21.07.2020 um 10:28 schrieb Ishan Chattopadhyaya
> :
>
> I think this warrants a JIRA. To work around this issue for now, you can
> use an environment variable SOLR_SECURITY_MANAGER_ENABLED=false before
> starting Solr.
>
>> On Thu, Jul 16, 2020 at 11:58 PM Jörn Franke wrote:
I think this warrants a JIRA. To work around this issue for now, you can
use an environment variable SOLR_SECURITY_MANAGER_ENABLED=false before
starting Solr.
On Thu, Jul 16, 2020 at 11:58 PM Jörn Franke wrote:
> The solution would be probably a policy file shipped with Solr that allows
> the ZK
Hi,
When you say you uninstalled 8.x, what exactly does that mean? That you
deleted the directory of the binary *and* the Solr home where the index
data was stored?
Either way, check your logs (by default, solr/server/logs under the location
where you extracted the binary) and you will see the exception tha