to pursue much. See:
> >
> > https://lucidworks.com/post/really-batch-updates-solr-2/
> >
> > If you’re looking at trying to maximize throughput, adding
> > client threads that send Solr documents is a better approach.
> >
> > All that said, I usually just
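A minimal SolrJ sketch of the batched, multi-threaded indexing approach described in the quoted
reply above. The URL, collection name, field names, batch size, and thread count are illustrative
placeholders, not values from this thread:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class BatchIndexer {
        public static void main(String[] args) throws Exception {
            // ConcurrentUpdateSolrClient buffers documents and sends them from a pool of
            // background threads, which keeps Solr busy without one giant update request.
            // Placeholder URL/collection; adjust for your own setup.
            try (ConcurrentUpdateSolrClient client =
                     new ConcurrentUpdateSolrClient.Builder("http://localhost:8983/solr/mycollection")
                         .withQueueSize(10000)   // documents buffered before the caller blocks
                         .withThreadCount(4)     // concurrent update threads
                         .build()) {
                List<SolrInputDocument> batch = new ArrayList<>();
                for (int i = 0; i < 100_000; i++) {
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", Integer.toString(i));
                    doc.addField("title_t", "document " + i);
                    batch.add(doc);
                    if (batch.size() == 1000) {   // send moderate batches rather than one huge call
                        client.add(batch);
                        batch.clear();
                    }
                }
                if (!batch.isEmpty()) client.add(batch);
                client.commit();
            }
        }
    }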
Hi,
Could someone help me with the best way to determine the maximum
number of docs I can send in a single update call to Solr in a master /
slave architecture?
Thanks!
Hi,
I want to use the Solr query elevation component. Let's say I want to
elevate "doc_id" when a user inputs the query "qwerty". I am able to get a
prototype to work by filling these values in elevate.xml and hitting the
Solr API with q="qwerty".
However, in our service, where I want to plug this
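A minimal SolrJ sketch of issuing an elevation-aware query from application code, assuming the
request handler being hit already includes the QueryElevationComponent; the URL and collection
name are placeholders:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class ElevationQuery {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/collection; adjust for your own setup.
            try (HttpSolrClient client =
                     new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
                SolrQuery query = new SolrQuery("qwerty");
                query.set("enableElevation", "true");   // honor elevate.xml for this request
                query.set("forceElevation", "true");    // keep elevated docs on top even when not sorting by score
                QueryResponse rsp = client.query(query);
                rsp.getResults().forEach(d -> System.out.println(d.getFieldValue("id")));
            }
        }
    }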
Hi,
Is there a way to analyze how multiple values in a multivalued field are
being tokenized and processed during indexing?
The "Analysis" page on the UI assumes that my multiple comma-separated
values is a single value. It filters out the comma and acts as if it's a
single value that I specified
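For what it's worth, each value of a multivalued field goes through the analysis chain separately
at index time, so what the Analysis page shows depends on whether the values arrive as separate
entries or as one comma-joined string. A small SolrJ sketch of the difference; field and
collection names are placeholders:

    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class MultiValuedExample {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/collection; adjust for your own setup.
            try (HttpSolrClient client =
                     new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", "1");
                // Three separate values: each one is analyzed on its own, which is what the
                // Analysis page simulates when you paste one value at a time.
                doc.addField("tags_txt", "red");
                doc.addField("tags_txt", "green");
                doc.addField("tags_txt", "blue");
                // By contrast, a single comma-separated string is one value, analyzed once.
                doc.addField("tags_single_txt", "red,green,blue");
                client.add(doc);
                client.commit();
            }
        }
    }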
Hi,
If the number of cores spanned is low, I guess firing parallel queries and
taking the union or intersection should work since their schemas are the same. Do
you notice any perceivable difference in performance?
Best,
Sidharth
On Fri, Aug 9, 2019 at 2:54 PM Komal Motwani
wrote:
> Hi,
>
>
>
> I ha
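A rough sketch of the parallel-query idea suggested above: query two cores that share a schema
and intersect the returned ids. Core URLs, field names, and the merge step are placeholders:

    import java.util.HashSet;
    import java.util.Set;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;

    public class ParallelCoreQuery {
        static Callable<Set<String>> idsFrom(String coreUrl, String q) {
            return () -> {
                // One client per core; collect the "id" field of each hit.
                try (HttpSolrClient client = new HttpSolrClient.Builder(coreUrl).build()) {
                    Set<String> ids = new HashSet<>();
                    client.query(new SolrQuery(q)).getResults()
                          .forEach(d -> ids.add((String) d.getFieldValue("id")));
                    return ids;
                }
            };
        }

        public static void main(String[] args) throws Exception {
            // Placeholder core URLs and query; adjust for your own setup.
            ExecutorService pool = Executors.newFixedThreadPool(2);
            Future<Set<String>> a = pool.submit(idsFrom("http://localhost:8983/solr/core1", "title_t:solr"));
            Future<Set<String>> b = pool.submit(idsFrom("http://localhost:8983/solr/core2", "title_t:solr"));
            Set<String> intersection = new HashSet<>(a.get());
            intersection.retainAll(b.get());   // for a union, use addAll() instead
            System.out.println(intersection);
            pool.shutdown();
        }
    }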
ade in solrconfig.xml to
> define <schemaFactory/>, you’ll be using managed-schema, not schema.xml.
>
> Best,
> Erick
>
> > On Jul 23, 2019, at 5:51 AM, Sidharth Negi
> wrote:
> >
> > Hi,
> >
> > The "replicateNow" button in the admin UI doesn't seem to wor
Hi,
The "replicateNow" button in the admin UI doesn't seem to work since the
"schema.xml" (which I modified on slave) is not being updated to reflect
that of the master. I have used this button before and it has always cloned
index right away. Any ideas on what could be the possible reason for thi
s for scores, e.g. sqrt(q1)
+ sqrt(q2) + 0.6*q3.
On Wed, Apr 17, 2019 at 6:20 PM Sidharth Negi
wrote:
> This does indeed reduce the time, but doesn't quite do what I wanted. This
> approach penalizes the docs based on "coord" factor. In other words, for a
> doc with sc
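A sketch of how a weighted formula like sqrt(q1) + sqrt(q2) + 0.6*q3 can be written as a single
function query over sub-query scores, using the query() value source to pull in each sub-query's
score. The parameter names follow the q1/q2/q3 convention in this thread; the edismax bodies and
field names are placeholders:

    import org.apache.solr.client.solrj.SolrQuery;

    public class WeightedScoreQuery {
        public static SolrQuery build() {
            SolrQuery query = new SolrQuery();
            // sqrt(q1) + sqrt(q2) + 0.6*q3 expressed with Solr function queries
            query.set("q", "{!func}sum(sqrt(query($q1)),sqrt(query($q2)),product(0.6,query($q3)))");
            // Placeholder sub-queries; replace qf and query text with your own.
            query.set("q1", "{!edismax qf='title_t'}first query");
            query.set("q2", "{!edismax qf='title_t'}second query");
            query.set("q3", "{!edismax qf='title_t'}third query");
            return query;
        }
    }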
This does indeed reduce the time, but doesn't quite do what I wanted. This
approach penalizes the docs based on "coord" factor. In other words, for a
doc with scores=5 on just one query (and nothing on others), the resulting
score would now be 5/3 since only one clause matches.
1. I wonder why doe
Hi,
I'm working with the "edismax" and "function-query" parsers in Solr and have
difficulty understanding whether the query time taken by
"function-query" makes sense. The query I'm trying to optimize looks as
follows:
q={!func sum($q1,$q2,$q3)} where q1,q2,q3 are edismax queries.
The QTime returned
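For reference, a hedged SolrJ sketch of how a request of this shape is typically assembled, with
each edismax sub-query passed as its own parameter and debug=timing requested to see per-component
timings. The URL, fields, and query strings are placeholders:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class FunctionQueryTiming {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/collection; adjust for your own setup.
            try (HttpSolrClient client =
                     new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
                SolrQuery query = new SolrQuery();
                // Sum of the three sub-query scores via the query() value source.
                query.set("q", "{!func}sum(query($q1),query($q2),query($q3))");
                query.set("q1", "{!edismax qf='title_t body_t'}first query");
                query.set("q2", "{!edismax qf='title_t body_t'}second query");
                query.set("q3", "{!edismax qf='title_t body_t'}third query");
                query.set("debug", "timing");   // per-component timings in the debug section
                QueryResponse rsp = client.query(query);
                System.out.println("QTime=" + rsp.getQTime());
            }
        }
    }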