Hi all,
We are using Solr 7.7.2. After an optimize, deleted docs are still being counted in maxDocs.
To my knowledge, maxDocs and numDocs should match after an optimize, but that is not happening here. Is there any way to troubleshoot this?
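One way to troubleshoot is to ask each core directly what it thinks the index contains, e.g. via the Luke request handler, and compare numDocs, maxDoc, and deletedDocs before and after the optimize. A minimal sketch in Python (the host and core name are placeholders):

import requests

SOLR = "http://localhost:8983/solr"   # placeholder host
CORE = "my_core"                      # placeholder core name

# The Luke handler reports index-level stats without fetching terms.
resp = requests.get(f"{SOLR}/{CORE}/admin/luke", params={"numTerms": 0, "wt": "json"})
resp.raise_for_status()
index = resp.json()["index"]

num_docs = index["numDocs"]
max_doc = index["maxDoc"]
deleted = index.get("deletedDocs", max_doc - num_docs)
print(f"numDocs={num_docs} maxDoc={max_doc} deletedDocs={deleted}")

If deletedDocs is still non-zero on every replica after the optimize completes, the merge did not rewrite the segments that hold those deletes.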
Suppose I have two phone numbers, P1 and P2, where the number of records with P1 is X and the number with P2 is 2X. If I query for R rows for each of P1 and P2, the QTime for P2 is higher. I am not specifying any sort parameter, and the number of rows I ask for is the same in both cases.
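To see where the extra time goes, it may help to run both queries with debug=timing and compare the per-component timings in the response. A rough sketch in Python (host, collection, field name, and values are placeholders):

import requests

SOLR = "http://localhost:8983/solr"   # placeholder host
COLLECTION = "my_collection"          # placeholder collection

def timed_query(phone: str, rows: int) -> dict:
    # debug=timing adds a per-component timing breakdown to the response.
    params = {"q": f"phone:{phone}", "rows": rows, "debug": "timing", "wt": "json"}
    resp = requests.get(f"{SOLR}/{COLLECTION}/select", params=params)
    resp.raise_for_status()
    body = resp.json()
    return {
        "QTime": body["responseHeader"]["QTime"],
        "numFound": body["response"]["numFound"],
        "timing": body["debug"]["timing"],
    }

for phone in ("P1", "P2"):            # placeholder values
    print(phone, timed_query(phone, rows=10))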
Hi,
Is it a good idea to create 10 dynamic fields of type pint in Solr? I actually have that many fields to search on, and which fields apply depends on the user. I'm also using SolrCloud in real time.
Thanks in advance!
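For reference, this is roughly how one such dynamic field could be declared through the Schema API; this is only a sketch, with the collection name and field-name pattern as placeholders:

import requests

SOLR = "http://localhost:8983/solr"   # placeholder host
COLLECTION = "my_collection"          # placeholder collection

# Declare a pint dynamic field so any *_num field is indexed as an integer.
payload = {
    "add-dynamic-field": {
        "name": "*_num",              # placeholder pattern
        "type": "pint",
        "indexed": True,
        "stored": True,
    }
}
resp = requests.post(f"{SOLR}/{COLLECTION}/schema", json=payload)
resp.raise_for_status()
print(resp.json())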
Regards,
Sai Vignan M
Why so many shards?
> On May 10, 2020, at 9:09 PM, Ganesh Sethuraman wrote:
>
> We are using dedicated hosts, CentOS, on EC2 r5.12xlarge (48 CPU, ~360 GB RAM), 2 nodes. Swappiness is set to 1, with a general-purpose 2 TB EBS SSD volume. JVM heap of 18 GB, with G1GC enabled. About 92 collections …
We are using dedicated hosts, CentOS, on EC2 r5.12xlarge (48 CPU, ~360 GB RAM), 2 nodes. Swappiness is set to 1, with a general-purpose 2 TB EBS SSD volume. JVM heap of 18 GB, with G1GC enabled. About 92 collections, with an average of 8 shards and 2 replicas each. Most updates come in via daily batch updates. Whil…
Hi Jan,
Could you advise in more detail, step by step, how to set this up between Solr and ZooKeeper?
Do we need to put ZooKeeper 3.5.5's jetty-all.jar on Solr's classpath, or is that already taken care of?
What configuration do we need for an SSL-enabled ZooKeeper to communicate with Solr?
On 5/10/2020 4:48 PM, Ganesh Sethuraman wrote:
The additional info is that when we run the test for longer (20 minutes) we see better response times; however, with a short test (5 minutes), rerun after an hour or so, we see slow response times again. Note that we don't update th…
Here is a quick update based on your questions, and a few additional pieces of information that should help.
The additional info is that when we run the test for longer (20 minutes) we see better response times; however, with a short test (5 minutes), rerun after an hour or so, we see slow response times again…
Not documented yet, see https://issues.apache.org/jira/browse/SOLR-7893
Jan Høydahl
> On 10 May 2020, at 12:48, ChienHuaWang wrote:
>
> Hi,
> Does anyone know how to handle TLS between Solr and ZooKeeper?
> From the Solr docs, I can only find information about Solr's internal communication, with no specific detail about how ZooKeeper should be handled.
Choose whichever example is closest to what you want to do, then strip it down, removing everything you don’t use. Note that the _default configset has schema guessing enabled, which you don’t want in production.
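If you do start from _default, schema guessing can be switched off by setting the update.autoCreateFields user property through the Config API. A minimal sketch (host and collection name are placeholders):

import requests

SOLR = "http://localhost:8983/solr"   # placeholder host
COLLECTION = "my_collection"          # placeholder collection

# Turning off update.autoCreateFields disables the add-unknown-fields
# ("schemaless") update chain that _default ships with.
payload = {"set-user-property": {"update.autoCreateFields": "false"}}
resp = requests.post(f"{SOLR}/{COLLECTION}/config", json=payload)
resp.raise_for_status()
print(resp.json())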
Jan Høydahl
> On 9 May 2020, at 22:34, Steven White wrote:
>
> Hi everyone,
>
> There a…
I wrote some Python for updating a collection config. An optional part of that is to go to each replica and start a suggester build.
If your collection is sharded and you load from a dictionary, you’ll also need to add distrib=false to the queries; otherwise you’ll get suggest results from ever…
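As an illustration, the per-replica build can look roughly like this; this is a sketch, not the actual script, and the replica URLs, handler path (/suggest), and suggester name are placeholders:

import requests

# Base URLs of the individual replica cores (placeholders).
REPLICA_URLS = [
    "http://host1:8983/solr/my_collection_shard1_replica_n1",
    "http://host2:8983/solr/my_collection_shard2_replica_n2",
]

for core_url in REPLICA_URLS:
    # distrib=false keeps the request on this core, so each replica
    # builds its own suggester dictionary.
    params = {
        "suggest": "true",
        "suggest.build": "true",
        "suggest.dictionary": "mySuggester",   # placeholder suggester name
        "distrib": "false",
        "wt": "json",
    }
    resp = requests.get(f"{core_url}/suggest", params=params)
    resp.raise_for_status()
    print(core_url, resp.json()["responseHeader"]["QTime"])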
Do not, repeat NOT, run expungeDeletes after each deleteByQuery; it is a very expensive operation. Perhaps after the nightly batch, but I doubt that’ll help much anyway.
30% deleted docs is quite normal, and should definitely not change the response time by a factor of 100! So there’s some other issue.
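If you do expunge at all, do it once after the nightly batch finishes, for example as a single commit with expungeDeletes=true. A sketch (host and collection name are placeholders):

import requests

SOLR = "http://localhost:8983/solr"   # placeholder host
COLLECTION = "my_collection"          # placeholder collection

# One commit with expungeDeletes after the whole batch, instead of
# expunging after every deleteByQuery.
resp = requests.get(
    f"{SOLR}/{COLLECTION}/update",
    params={"commit": "true", "expungeDeletes": "true", "wt": "json"},
)
resp.raise_for_status()
print(resp.json())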
Hi,
Does anyone know how to handle TLS between Solr and ZooKeeper?
From the Solr docs, I can only find information about Solr's internal communication, with no specific detail about how ZooKeeper should be handled.
Appreciate any suggestions, thanks.