Well, in a way, QTime can depend on the total number of terms existing in
the core.
It would have been better if you had posted a sample query and your analysis
chain.
On Mon, 11 May 2020 at 11:45, Anshuman Singh wrote:
Suppose I have two phone numbers, P1 and P2, and the number of records with
P1 is X while the number with P2 is 2X (2 times X). If I query for R rows
for each of P1 and P2, the QTime in the case of P2 is higher. I am not
specifying any sort parameter, and the number of rows I'm asking for is the
same in both cases.
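To compare the two queries, it helps to look at the QTime that Solr reports in the `responseHeader` of every JSON response (QTime is the server-side query time in milliseconds, excluding network and response-writing time). A minimal sketch, parsing sample payloads shaped like Solr's standard JSON response rather than hitting a live server (the payload values here are made up for illustration):

```python
import json

def qtime(solr_json: str) -> int:
    """Extract QTime (ms) from a Solr JSON response body."""
    body = json.loads(solr_json)
    return body["responseHeader"]["QTime"]

def num_found(solr_json: str) -> int:
    """Extract the number of matching docs from a Solr JSON response body."""
    body = json.loads(solr_json)
    return body["response"]["numFound"]

# Hypothetical responses for the P1 and P2 queries described above.
resp_p1 = '{"responseHeader": {"status": 0, "QTime": 12}, "response": {"numFound": 1000, "docs": []}}'
resp_p2 = '{"responseHeader": {"status": 0, "QTime": 25}, "response": {"numFound": 2000, "docs": []}}'

print(qtime(resp_p1), num_found(resp_p1))
print(qtime(resp_p2), num_found(resp_p2))
```

Logging QTime against numFound for a batch of queries like this makes it easy to see whether query time really scales with the number of matches.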
Hi Kshitij,
Query time depends on the query parameters, the number of docs matched,
collection size, index size on disk, resources available, and caches.
A larger number of fields per doc will result in the index being bigger on
disk, but assuming there are enough resources - mainly RAM for OS caches -
that should not hurt query time much.
Hi,
I have 120 fields in a single document and I am indexing and storing all of
them, i.e. indexed=true and stored=true in my schema.
I need to understand how that might be affecting my query time overall.
What is the relation between query time and indexing all fields in the
schema?
Regards,
Ks
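For reference, the setup described in the question corresponds to a schema fragment like the following (field names here are hypothetical; only the indexed/stored flags matter):

```xml
<!-- Every field both searchable and retrievable. -->
<field name="phone"    type="string"       indexed="true" stored="true"/>
<field name="address"  type="text_general" indexed="true" stored="true"/>
<!-- ... remaining fields, same flags ... -->
```

A common tuning step is to set indexed="false" on fields that are only ever returned, and stored="false" on fields that are only ever searched: indexed="true" grows the index structures that queries traverse, while stored="true" mainly grows the on-disk size and the cost of fetching documents.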
-- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: neil22 <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Monday, April 14, 2008 5:00:05 PM
Subject: solr query time
It seems that response time to a query is linear with the size of the result
set, even when only the first 10 hits are requested. For example, with a
query matching many documents that have "feature1", all with the same score:
- query time = 30 seconds to get first 10 hits.
Is there any optimization I can do so that the query time for the first 10
hits is constant regardless of result set size?