Re: QTime lesser for facet.limit=-1 than facet.limit=5000/10000

2020-07-08 Thread Mikhail Khludnev
Which is more optimized: facet.limit=-1 OR facet.limit=1/5/4? > For a high Cardinality string field, with no cache enabled, no docValues > enabled, after every RELOAD on Solr admin UI for each query with different > facet.limit, why the QTime for "facet.limit=-1" is

QTime lesser for facet.limit=-1 than facet.limit=5000/10000

2020-07-08 Thread ana
Hi Team, Which is more optimized: facet.limit=-1 OR facet.limit=1/5/4? For a high Cardinality string field, with no cache enabled, no docValues enabled, after every RELOAD on Solr admin UI for each query with different facet.limit, why the QTime for "facet.limit=-1" is

Re: QTime

2019-07-12 Thread Edward Ribeiro
> Wouldn't it be the case of using the &rows=0 parameter on those requests? Wdyt? > Edward > On Thu, Jul 11, 2019, 14:24, Erick Erickson wrote: >> Not only does Qtime not include network latency, it also doesn't

Re: QTime

2019-07-11 Thread Erick Erickson
true, although there’s still network that can’t be included. > On Jul 11, 2019, at 5:55 PM, Edward Ribeiro wrote: > > Wouldn't it be the case of using the &rows=0 parameter on those requests? Wdyt? > > Edward > > On Thu, Jul 11, 2019, 14:24, Erick Erickson wrote:

Re: QTime

2019-07-11 Thread Edward Ribeiro
Wouldn't it be the case of using the &rows=0 parameter on those requests? Wdyt? Edward On Thu, Jul 11, 2019, 14:24, Erick Erickson wrote: > Not only does Qtime not include network latency, it also doesn't include > the time it takes to assemble the docs for return, which c

Re: QTime

2019-07-11 Thread Erick Erickson
Not only does Qtime not include network latency, it also doesn't include the time it takes to assemble the docs for return, which can be lengthy when rows is large.. On Wed, Jul 10, 2019, 14:39 Shawn Heisey wrote: > On 7/10/2019 3:17 PM, Lucky Sharma wrote: > > I am seeing
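
Erick's point can be made concrete: QTime covers only server-side query execution. A minimal sketch, assuming Solr's standard JSON responseHeader shape, with illustrative numbers similar to those reported in this thread:

```python
def qtime_gap(response, wall_ms):
    """Milliseconds spent outside Solr's query execution.

    QTime (in the responseHeader) excludes network latency and the time
    to assemble and stream stored fields, so the gap grows as `rows`
    grows; requesting rows=0 isolates pure search time.
    """
    return wall_ms - response["responseHeader"]["QTime"]

# Illustrative numbers: 400 ms QTime, 1000 ms observed on the client.
resp = {"responseHeader": {"status": 0, "QTime": 400}}
print(qtime_gap(resp, 1000))  # 600
```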

Re: QTime

2019-07-10 Thread Shawn Heisey
On 7/10/2019 3:17 PM, Lucky Sharma wrote: I am seeing one very weird behaviour of QTime of SOLR. Scenario is : When I am hitting the Solr Cloud Instance, situated at a DC with my local machine while load test I was seeing 400ms Qtime response and 1sec Http Response time. How much data was in

QTime

2019-07-10 Thread Lucky Sharma
Hi all, I am seeing one very weird behaviour of QTime of SOLR. Scenario is : When I am hitting the Solr Cloud Instance, situated at a DC with my local machine while load test I was seeing 400ms Qtime response and 1sec Http Response time. While I am trying to do the same process within the same

Re: Breaking down QTime for debugging performance

2017-08-16 Thread Erick Erickson
Try adding &shards.info=true to the URL. On Tue, Aug 15, 2017 at 5:18 PM, Nawab Zada Asad Iqbal wrote: > Hi all > > For a given solr host and shard, is there any way to get a breakdown on > QTime to see where is the time being spent? > > > Thanks > Nawab
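
The &shards.info=true parameter adds a per-shard timing section to distributed responses. A sketch of picking out the slowest shard from that section, assuming its standard layout (shard-URL keys, each with numFound/time); the hosts below are made up:

```python
def slowest_shard(shards_info):
    """Return (shard_url, elapsed_ms) for the slowest shard in the
    shards.info section of a distributed Solr response."""
    url = max(shards_info, key=lambda u: shards_info[u]["time"])
    return url, shards_info[url]["time"]

# Hypothetical per-shard timings, for illustration only.
info = {
    "http://host1:8983/solr/shard1": {"numFound": 100, "time": 12},
    "http://host2:8983/solr/shard2": {"numFound": 250, "time": 480},
}
print(slowest_shard(info))  # ('http://host2:8983/solr/shard2', 480)
```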

Breaking down QTime for debugging performance

2017-08-15 Thread Nawab Zada Asad Iqbal
Hi all For a given solr host and shard, is there any way to get a breakdown on QTime to see where is the time being spent? Thanks Nawab

Re: Field collapsing, facets, and qtime: caching issue?

2017-02-13 Thread Joel Bernstein
ld > give me the expectation that by enabling facets with facet=true, the facet > component would need to do additional work and so the qTime cost would be > paid by that component. Here is the debug I get for repeated hits against > /default?indent=on&q=*:*&wt=json&

Re: Field collapsing, facets, and qtime: caching issue?

2017-02-13 Thread ronbraun
give me the expectation that by enabling facets with facet=true, the facet component would need to do additional work and so the qTime cost would be paid by that component. Here is the debug I get for repeated hits against /default?indent=on&q=*:*&wt=json&fq={!collapse+field=groupid

Re: Field collapsing, facets, and qtime: caching issue?

2017-02-10 Thread Joel Bernstein
cit > > > > The first query runs about 600ms, then subsequent repeats of the same query > are 0-5ms for qTime, which I interpret to mean that the query is cached > after the first hit. All as expected. > > However, if I enable facets without actually requesti

Field collapsing, facets, and qtime: caching issue?

2017-02-10 Thread Ronald K. Braun
first query runs about 600ms, then subsequent repeats of the same query are 0-5ms for qTime, which I interpret to mean that the query is cached after the first hit. All as expected. However, if I enable facets without actually requesting a facet: /default?indent=on&q=*:*&wt=json&fq={!

Re: Update Speed: QTime 1,000 - 5,000

2016-04-06 Thread Erick Erickson
you can mitigate the impact of throwing away caches on soft commits by doing appropriate autowarming, both the newSearcher and cache settings in solrconfig.xml. Be aware that you don't want to go overboard here, I'd start with 20 or so as the autowarm counts for queryResultCache and filterCache.
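
A sketch of the solrconfig.xml pieces Erick refers to, using his suggested autowarm count of 20; the cache classes, sizes, and warming query are illustrative placeholders, not settings taken from this thread:

```xml
<!-- Conservative autowarm counts, per the advice above. -->
<filterCache class="solr.FastLRUCache" size="512"
             initialSize="512" autowarmCount="20"/>
<queryResultCache class="solr.LRUCache" size="512"
                  initialSize="512" autowarmCount="20"/>

<!-- newSearcher listener: run a representative query after each commit
     so the first user query does not pay the warm-up cost. The q/fq
     values here are hypothetical. -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">*:*</str><str name="fq">someField:SomeVal</str></lst>
  </arr>
</listener>
```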

Re: Update Speed: QTime 1,000 - 5,000

2016-04-06 Thread Alessandro Benedetti
On Wed, Apr 6, 2016 at 7:53 AM, Robert Brown wrote: > The QTime's are from the updates. > > We don't have the resource right now to switch to SolrJ, but I would > assume only sending updates to the leaders would take some redirects out of > the process, How do you route your documents now ? Aren

Re: Update Speed: QTime 1,000 - 5,000

2016-04-05 Thread Robert Brown
The QTime's are from the updates. We don't have the resource right now to switch to SolrJ, but I would assume only sending updates to the leaders would take some redirects out of the process, I can regularly query for the collection status to know who's who. I'm now more interested in the ca

Re: Update Speed: QTime 1,000 - 5,000

2016-04-05 Thread Erick Erickson
bq: Apart from the obvious delay, I'm also seeing QTime's of 1,000 to 5,000 QTimes for what? The update? Queries? If for queries, autowarming may help, especially as your soft commit is throwing away all the top-level caches (i.e. the ones configured in solrconfig.xml) every minute. It shouldn

Re: Update Speed: QTime 1,000 - 5,000

2016-04-05 Thread John Bickerstaff
A few thoughts... From a black-box testing perspective, you might try changing that softCommit time frame to something longer and see if it makes a difference. The size of your documents will make a difference too - so the comparison to 300 - 500 on other cloud setups may or may not be compari

Update Speed: QTime 1,000 - 5,000

2016-04-05 Thread Robert Brown
Hi, I'm currently posting updates via cURL, in batches of 1,000 docs in JSON files. My setup consists of 2 shards, 1 replica each, 50m docs in total. These updates are hitting a node at random, from a server across the Internet. Apart from the obvious delay, I'm also seeing QTime's of 1,00

Re: Solr QTime explanation

2016-01-26 Thread Damien Picard
> This query mostly returns a QTime=4 and it takes around 20ms on the client side to get the result. > We have to handle around 200 simultaneous connections. You are probably experiencing congestion. JMeter can visualize throughput. Try experimenting with

Re: Solr QTime explanation

2016-01-19 Thread Toke Eskildsen
Damien Picard wrote: > Currently we have 4 Solr nodes, with 12Gb memory (heap) ; the collections > are replicated (4 shards, 1 replica). > This query mostly returns a QTime=4 and it takes around 20ms on the client > side to get the result. > We have to handle around 200 simultane

Re: Solr QTime explanation

2016-01-19 Thread Damien Picard
Thank you for your advices. Currently we have 4 Solr nodes, with 12Gb memory (heap) ; the collections are replicated (4 shards, 1 replica). This query mostly returns a QTime=4 and it takes around 20ms on the client side to get the result. We have to handle around 200 simultaneous connections

Re: Solr QTime explanation

2016-01-19 Thread Shawn Heisey

Solr QTime explanation

2016-01-19 Thread Damien Picard
I see that the QTime is "3003", but I get nothing (0.0) for all the other times. Do you know what it means? Thank you in advance. -- Damien Picard Expert GWT <http://www.editions-eni.fr/livres/gwt-google-web-toolkit-developpez-des-app

Re: &fq degrades qtime in a 20million doc collection

2016-01-16 Thread Toke Eskildsen
Anria B. wrote: > Schema investigations: the &fq are frequently on multivalued string fields, and we believe that it may also be slowing down the &fq even more, but we were wondering why. When we run &fq on single-valued fields it's faster than on the multi-valued fields, even when the multi

Re: &fq degrades qtime in a 20million doc collection

2016-01-15 Thread Anria B.
n for everybody's help and pointers and hints, you kept us busy with changing our mindset on a lot of things here. Regards Anria -- View this message in context: http://lucene.472066.n3.nabble.com/fq-degrades-qtime-in-a-20million-doc-collection-tp4250567p4251212.html Sent from the Solr - U

Re: &fq degrades qtime in a 20million doc collection

2016-01-15 Thread Yonik Seeley
On Wed, Jan 13, 2016 at 7:01 PM, Shawn Heisey wrote: [...] >> 2. q=*&fq=someField:SomeVal ---> takes 2.5 seconds >> 3.q=someField:SomeVal --> 300ms [...] >> >> have any of you encountered such a thing? >> that FQ degrades query time by so much? > A value of * for your query will be sl
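
Yonik's caveat is worth guarding against on the client side: a bare `*` is parsed as a wildcard on the default field and must enumerate terms, while `*:*` is a constant-score match-all. A minimal sketch:

```python
def normalize_match_all(q):
    """Rewrite q=* to q=*:*.

    A lone `*` is a wildcard query on the default field, which has to
    expand every term, while `*:*` is a cheap match-all query, so
    `q=*:*&fq=...` avoids the slowdown discussed in this thread.
    """
    return "*:*" if q.strip() == "*" else q

print(normalize_match_all("*"))            # *:*
print(normalize_match_all("field:value"))  # field:value
```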

Re: &fq degrades qtime in a 20million doc collection

2016-01-15 Thread Toke Eskildsen
Anria B. wrote: > Thanks Toke for this. It gave us a ton to think about, and it really helps > supporting the notion of several smaller indexes over one very large one,> > where we can rather distribute a few JVM processes with less size each, than > have one massive one that is according to this

Re: &fq degrades qtime in a 20million doc collection

2016-01-15 Thread Anria B.

Re: &fq degrades qtime in a 20million doc collection

2016-01-14 Thread Anria B.

Re: &fq degrades qtime in a 20million doc collection

2016-01-14 Thread Shawn Heisey
On 1/14/2016 12:07 PM, Anria B. wrote: > Here are some Actual examples, if it helps > > wt=json&q=*:*&indent=on&fq=SolrDocumentType:"invalidValue"&fl=timestamp&rows=0&start=0&debug=timing > "QTime": 590, > Now

Re: &fq degrades qtime in a 20million doc collection

2016-01-14 Thread Shawn Heisey
On 1/14/2016 1:01 PM, Anria B. wrote: > Here is a stacktrace of when we put a &fq in the autowarming, or in the > "newSearcher" to warm up the collection after a commit. > org.apache.solr.core.SolrCore - org.apache.solr.common.SolrException: Error > opening new searcher. exceeded limit of maxWarmi

Re: &fq degrades qtime in a 20million doc collection

2016-01-14 Thread Anria B.
er.run(ThreadPoolExecutor.java:617) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:745)

Re: &fq degrades qtime in a 20million doc collection

2016-01-14 Thread Anria B.
Here are some Actual examples, if it helps wt=json&q=*:*&indent=on&fq=SolrDocumentType:"invalidValue"&fl=timestamp&rows=0&start=0&debug=timing { "responseHeader": { "status": 0, "QTime": 590,
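
With &debug=timing, as in the query above, Solr reports per-component prepare/process times. A sketch of summing them to see how much of QTime the components account for; the numbers below are made up but follow the shape of Solr's debug output:

```python
def timing_breakdown(debug_timing):
    """Sum per-component times from a &debug=timing response section.

    If QTime is much larger than the sum, time is being spent outside
    the search components (e.g. query parsing, distributed merge, GC).
    """
    total = {}
    for phase in ("prepare", "process"):
        for comp, t in debug_timing[phase].items():
            if comp != "time":  # skip the phase's own total
                total[comp] = total.get(comp, 0.0) + t["time"]
    return total

# Hypothetical numbers shaped like Solr's debug.timing section.
timing = {
    "time": 590.0,
    "prepare": {"time": 2.0, "query": {"time": 1.0}, "facet": {"time": 1.0}},
    "process": {"time": 500.0, "query": {"time": 480.0}, "facet": {"time": 20.0}},
}
print(timing_breakdown(timing))  # {'query': 481.0, 'facet': 21.0}
```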

Re: &fq degrades qtime in a 20million doc collection

2016-01-14 Thread Anria B.
ill dial it down on the Heap size Anria

Re: &fq degrades qtime in a 20million doc collection

2016-01-14 Thread Jack Krupansky
teness: How large is the collection in bytes? > > > >> 2. q=*&fq=someField:SomeVal ---> takes 2.5 seconds > >> 3.q=someField:SomeVal --> 300ms > >> 4. as numFound -> infinity, qtime -> infinity. > > > > What are you doing beside

Re: &fq degrades qtime in a 20million doc collection

2016-01-14 Thread Erick Erickson
the collection in bytes? > >> 2. q=*&fq=someField:SomeVal ---> takes 2.5 seconds >> 3.q=someField:SomeVal --> 300ms >> 4. as numFound -> infinity, qtime -> infinity. > > What are you doing besides the q + fq above? Assuming a modest size >

Re: &fq degrades qtime in a 20million doc collection

2016-01-14 Thread Toke Eskildsen
> 4. as numFound -> infinity, qtime -> infinity. What are you doing besides the q + fq above? Assuming a modest size index (let's say < 100GB), even 300ms for a simple key:value query is a long time. Usual culprits for performance problems when result set size grows are high va

Re: &fq degrades qtime in a 20million doc collection

2016-01-13 Thread Jack Krupansky
ou add &debug=true to your query, one of > the returned sections > will be "timings" for the various components of a query measured in > milliseconds. Occasionally > there will be surprises in there. > > What are you measuring when you say it takes seconds? The time

Re: &fq degrades qtime in a 20million doc collection

2016-01-13 Thread Erick Erickson
nder the result page or are you looking at the QTime parameter of the return packet? Best, Erick On Wed, Jan 13, 2016 at 4:27 PM, Anria B. wrote: > hi Shawn > > Thanks for the quick answer. As for the q=*, we also saw similar results > in our testing when doing things like > >

Re: &fq degrades qtime in a 20million doc collection

2016-01-13 Thread Anria B.
ry that! GC we've had default and G1 setups. Thanks for giving us something to think about Anria

Re: &fq degrades qtime in a 20million doc collection

2016-01-13 Thread Shawn Heisey
, what we are seeing goes against all intuition I've built up in the Solr > world > > 1. Collection has 20-30 million docs. > 2. q=*&fq=someField:SomeVal ---> takes 2.5 seconds > 3.q=someField:SomeVal --> 300ms > 4. as numFound -> infinity, qt

&fq degrades qtime in a 20million doc collection

2016-01-13 Thread Anria B.
've built up in the Solr world 1. Collection has 20-30 million docs. 2. q=*&fq=someField:SomeVal ---> takes 2.5 seconds 3. q=someField:SomeVal --> 300ms 4. as numFound -> infinity, qtime -> infinity. have any of you encountered such a thing? that FQ degrades

Re: Log numfound, qtime, ...

2015-03-04 Thread bengates
Hello everyone, I'll check this ASAP. Thanks for all your answers! Ben

Re: Log numfound, qtime, ...

2015-03-04 Thread Chris Hostetter
: Here's my need : I'd like to log Solr Responses so as to achieve some : business statistics. : I'd like to report, as a daily/weekly/yearly/whateverly basis, the following : KPIs : ... : I think I'll soon get into performance issues, as you guess. : Do you know a better approach ? All of this

RE: Log numfound, qtime, ...

2015-03-04 Thread Markus Jelsma
Hello - This patch may be more straightforward https://issues.apache.org/jira/browse/SOLR-4018 -Original message- > From:Ahmed Adel > Sent: Wednesday 4th March 2015 19:39 > To: solr-user@lucene.apache.org > Subject: Re: Log numfound, qtime, ... > > Hi, I believe

Re: Log numfound, qtime, ...

2015-03-04 Thread Ahmed Adel
Hi, I believe a better approach than Solarium is to create a custom search component that extends SearchComponent class and override process() method to store query, QTime, and numFound to a database for further analysis. This approach would cut steps 2 through 6 into one step. Analysis can be

Re: Log numfound, qtime, ...

2015-02-27 Thread Mikhail Khludnev
ds it to a PHP app > 3. The PHP app loads the Solarium library and forwards the request to > Solr/Jetty > 4. Solr replies a JSON and Solarium turns it into a PHP Solarium Response > Object > 5. The PHP app sends the user the raw JSON through NGINX (as if it were > Jetty) > 6.

Log numfound, qtime, ...

2015-02-27 Thread bengates
r the raw JSON through NGINX (as if it were Jetty) 6. The PHP app stores the query, the QTime and the numfound in a database I think I'll soon get into performance issues, as you guess. Do you know a better approach? Thanks, Ben

Re: Solr irregularly having QTime > 50000ms, stracing solr cures the problem

2014-07-20 Thread Harald Kirsch

Re: Solr irregularly having QTime > 50000ms, stracing solr cures the problem

2014-07-16 Thread IJ
a latency due to a DNS Name Lookup delay.

Re: Solr irregularly having QTime > 50000ms, stracing solr cures the problem

2014-07-14 Thread Harald Kirsch
t level is worth a month of guessing and poking. On Jul 8, 2014, at 3:53 AM, Harald Kirsch wrote: Hi all, This is what happens when I run a regular wget query to log the current number of documents indexed: 2014-07-08:07:23:28 QTime=20 numFound="5720168" 2014-07-08:07:24:28 QTime=12 num

Re: Solr irregularly having QTime > 50000ms, stracing solr cures the problem

2014-07-14 Thread Harald Kirsch
-td4143681.html was resolved by adding an explicit host mapping entry on /etc/hosts for inter node solr communication - thereby bypassing DNS Lookups.

Re: Solr irregularly having QTime > 50000ms, stracing solr cures the problem

2014-07-12 Thread IJ

Re: Solr irregularly having QTime > 50000ms, stracing solr cures the problem

2014-07-09 Thread Harald Kirsch
d way. Knowing exactly what's happening at the transport level is worth a month of guessing and poking. On Jul 8, 2014, at 3:53 AM, Harald Kirsch wrote: Hi all, This is what happens when I run a regular wget query to log the current number of documents indexed: 2014-07-08:07:23:2

Re: Solr irregularly having QTime > 50000ms, stracing solr cures the problem

2014-07-08 Thread Steve McKay
> > This is what happens when I run a regular wget query to log the current > number of documents indexed: > > 2014-07-08:07:23:28 QTime=20 numFound="5720168" > 2014-07-08:07:24:28 QTime=12 numFound="5721126" > 2014-07-08:07:25:28 QTime=19 numFound="5

Re: Solr irregularly having QTime > 50000ms, stracing solr cures the problem

2014-07-08 Thread Walter Underwood
Local disks or shared network disks? --wunder On Jul 8, 2014, at 11:43 AM, Shawn Heisey wrote: > On 7/8/2014 1:53 AM, Harald Kirsch wrote: >> Hi all, >> >> This is what happens when I run a regular wget query to log the >> current number of documents indexed: >&

Re: Solr irregularly having QTime > 50000ms, stracing solr cures the problem

2014-07-08 Thread Shawn Heisey
On 7/8/2014 1:53 AM, Harald Kirsch wrote: > Hi all, > > This is what happens when I run a regular wget query to log the > current number of documents indexed: > > 2014-07-08:07:23:28 QTime=20 numFound="5720168" > 2014-07-08:07:24:28 QTime=12 numFound="572

Re: Solr irregularly having QTime > 50000ms, stracing solr cures the problem

2014-07-08 Thread Harald Kirsch
, "Harald Kirsch" wrote: Hi all, This is what happens when I run a regular wget query to log the current number of documents indexed: 2014-07-08:07:23:28 QTime=20 numFound="5720168" 2014-07-08:07:24:28 QTime=12 numFound="5721126" 2014-07-08:07:25:28 QTime=19 numFo

Re: Solr irregularly having QTime > 50000ms, stracing solr cures the problem

2014-07-08 Thread Heyde, Ralf
xed: > > 2014-07-08:07:23:28 QTime=20 numFound="5720168" > 2014-07-08:07:24:28 QTime=12 numFound="5721126" > 2014-07-08:07:25:28 QTime=19 numFound="5721126" > 2014-07-08:07:27:18 QTime=50071 numFound="5721126" > 2014-07-08:07:29:08 QTime=5005

Solr irregularly having QTime > 50000ms, stracing solr cures the problem

2014-07-08 Thread Harald Kirsch
Hi all, This is what happens when I run a regular wget query to log the current number of documents indexed: 2014-07-08:07:23:28 QTime=20 numFound="5720168" 2014-07-08:07:24:28 QTime=12 numFound="5721126" 2014-07-08:07:25:28 QTime=19 numFound="5721126" 2014-07-08
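
A polling log in this format lends itself to simple spike detection. A sketch that parses the lines quoted above and flags outliers; the 1000 ms threshold is an arbitrary choice:

```python
import re

def qtime_spikes(log_lines, threshold_ms=1000):
    """Flag log lines whose QTime exceeds a threshold.

    Matches the wget-style log format quoted in this thread, e.g.
    '2014-07-08:07:23:28 QTime=20 numFound="5720168"'.
    """
    pat = re.compile(r'^(\S+) QTime=(\d+)')
    spikes = []
    for line in log_lines:
        m = pat.match(line)
        if m and int(m.group(2)) > threshold_ms:
            spikes.append((m.group(1), int(m.group(2))))
    return spikes

log = [
    '2014-07-08:07:23:28 QTime=20 numFound="5720168"',
    '2014-07-08:07:24:28 QTime=12 numFound="5721126"',
    '2014-07-08:07:27:18 QTime=50071 numFound="5721126"',
]
print(qtime_spikes(log))  # [('2014-07-08:07:27:18', 50071)]
```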

Re: Getting huge difference in QTime for terms.lower and terms.prefix

2014-04-10 Thread Jilani Shaik
Please provide suggestions what could be the reason for this. Thanks, On Thu, Apr 10, 2014 at 2:54 PM, Jilani Shaik wrote: > Hi, > > When I queried terms component with a "terms.prefix" the QTime for it is > <100 milli seconds, where as the same query I am giving with

Getting huge difference in QTime for terms.lower and terms.prefix

2014-04-10 Thread Jilani Shaik
Hi, When I queried terms component with a "terms.prefix" the QTime for it is <100 milli seconds, where as the same query I am giving with "terms.lower" then the QTime is > 500 milliseconds. I am using the Solr Cloud. I am giving both the cases terms.limit as 60 and

Re: Regarding reducing qtime

2013-09-07 Thread Furkan KAMACI
s consumed > by solr server (QTime). I used the jconsole monitor tool. The report: > heap usage of 10-50Mb, > no. of threads 10-20, > no. of classes around 3800, >

Regarding reducing qtime

2013-09-06 Thread prabu palanisamy
Hi, I am currently using Solr 3.5.0, indexed with a Wikipedia dump (50 GB), on Java 1.6. I am searching tweets in Solr. Currently it takes an average of 210 milliseconds for each post, of which 200 milliseconds are consumed by the Solr server (QTime). I used the jconsole monitor tool. The report

RE: Huge discrepancy between QTime and ElapsedTime

2013-08-14 Thread Jean-Sebastien Vachon
Thanks Shawn and Scott for your feedback. It is really appreciated. > -Original Message- > From: Shawn Heisey [mailto:s...@elyograg.org] > Sent: August-14-13 12:39 PM > To: solr-user@lucene.apache.org > Subject: Re: Huge discrepancy between QTime and ElapsedTime > >

Re: Huge discrepancy between QTime and ElapsedTime

2013-08-14 Thread Yonik Seeley
On Wed, Aug 14, 2013 at 12:39 PM, Shawn Heisey wrote: > You also have grouping enabled. From what I understand, that can be slow. > If you turn that off, what happens to your elapsed times? QTime would include that. It includes everything up until the point where the response starts str

Re: Huge discrepancy between QTime and ElapsedTime

2013-08-14 Thread Shawn Heisey
On 8/14/2013 9:09 AM, Jean-Sebastien Vachon wrote: I am running some benchmarks to tune our Solr 4.3 cloud and noticed that while the reported QTime is quite satisfactory (100 ms or so), the elapsed time is quite large (around 5 seconds). The collection contains 12.8M documents and the index

Re: Huge discrepancy between QTime and ElapsedTime

2013-08-14 Thread Scott Lundgren
Jean-Sebastien, We have had similar issues. In our cases, our QTime varied between 100ms and as much as 120s (that's right, 120,000ms). The times were so long that they resulted in timeouts upstream. In our case, we have settled in on the following hypothesis: The actual retrieval time (

Huge discrepancy between QTime and ElapsedTime

2013-08-14 Thread Jean-Sebastien Vachon
Hi All, I am running some benchmarks to tune our Solr 4.3 cloud and noticed that while the reported QTime is quite satisfactory (100 ms or so), the elapsed time is quite large (around 5 seconds). The collection contains 12.8M documents and the index size on disk is about 35 GB.. I have only

Re: solr qtime suddenly increased in production env

2013-08-05 Thread Shawn Heisey
On 8/5/2013 11:27 AM, adfel70 wrote: Thanks for your detailed answer. Some followup questions: 1. Are there any tests I can make to determine 100% that this is a "not enough RAM" scenario"? For heap size problems, turn on GC logging. Look at the log or run it through an analysis tool like G

Re: solr qtime suddenly increased in production env

2013-08-05 Thread adfel70
Each machine runs 2 solr processes, each process with 6gb memory to jvm. >> >> The cluster currently has 330 million documents, each process around 30gb >> of >> data. >> >> Until recently performance was fine, but after a recent indexing which >> added &

Re: solr qtime suddenly increased in production env

2013-08-05 Thread Shawn Heisey
million documents, each process around 30gb of data. Until recently performance was fine, but after a recent indexing which added arround 25 million docs, the search performance degraded dramatically. I'm now getting qtime of 30 second and sometimes even 60 seconds, for simple queries (fieldA:valu

solr qtime suddenly increased in production env

2013-08-05 Thread adfel70
of data. Until recently performance was fine, but after a recent indexing which added arround 25 million docs, the search performance degraded dramatically. I'm now getting qtime of 30 second and sometimes even 60 seconds, for simple queries (fieldA:value AND fieldB:value + facets + highlig

QTime > prepare+process

2013-06-11 Thread Strucken, Michael
Hi, I'm wondering why some of our queries take more than a second and started to debug with debugQuery=true. QTime is 2.028s Measured response time on client is 2.0566s But timing only sums up to 204ms: prepare org.apache.solr.handler.component.QueryCompone

RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times.

2013-04-24 Thread Dyer, James
lChecker : vastly varying spellcheck QTime times. One of our main concerns is the solr returns the best match based on what it thinks is the best. It uses Levenshtein's distance metrics to determine the best suggestions. Can we tune this to put more weightage on the number of frequency/hits vs

RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times.

2013-04-24 Thread SandeepM
this, suggestions would seem more relevant when corrected.Also, if we can do this while keeping maxCollation = 1 and maxCollationTries = "some reasonable number so that QTime does not go out of control" that will be great! Any insights into this would be great. Thanks for your

RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times.

2013-04-23 Thread Dyer, James
[mailto:skmi...@hotmail.com] Sent: Tuesday, April 23, 2013 2:13 PM To: solr-user@lucene.apache.org Subject: RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times. James, Is there a way to determine how many times the collations were tried? Is there a parameter that can be issued that can

RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times.

2013-04-23 Thread SandeepM

Re: spellcheck: change in behavior and QTime

2013-04-23 Thread SandeepM
I apologize for the length of the previous message. I do see a problem with spellcheck becoming faster (notice QTime). I also see an increase in the number of cache hits if spellcheck=false is run one time followed by the original spellcheck query. Seems like spellcheck=false alters the

RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times.

2013-04-22 Thread Dyer, James
ssage- From: SandeepM [mailto:skmi...@hotmail.com] Sent: Monday, April 22, 2013 4:04 PM To: solr-user@lucene.apache.org Subject: RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times. Chocolat Factry 0 77 1 0 8 615

RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times.

2013-04-22 Thread SandeepM
representation of Qtime. We use groupings in the search query. For Chocolate Factory, I get a search QTime of 198ms For Pursuit Happyness, I get a search QTime of 318ms Would appreciate your insights. Thanks. -- Sandeep

RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times.

2013-04-22 Thread Dyer, James
Original Message- From: SandeepM [mailto:skmi...@hotmail.com] Sent: Monday, April 22, 2013 2:02 PM To: solr-user@lucene.apache.org Subject: RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times. James, Thanks. That was very helpful. That helped me understand count and alternative

RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times.

2013-04-22 Thread SandeepM
&wt=xml&rows=10&version=2.2&echoParams=explicit In this case, the intent is to correct "chocolat factry" with "chocolate factory" which exists in my spell field index. I see a QTime from the above query as somewhere between 350-400ms. I run a similar query rep

spellcheck: change in behavior and QTime

2013-04-22 Thread SandeepM
I am using the same setup (solrconfig.xml and schema.xml) as stated in my prior message: http://lucene.472066.n3.nabble.com/DirectSolrSpellChecker-vastly-varying-spellcheck-QTime-times-tt4057176.html#a4057389 I am using SOLR 4.2.1 . Just wanted to report something wierd that I am seeing and would

RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times.

2013-04-19 Thread Dyer, James
om] Sent: Friday, April 19, 2013 12:48 PM To: solr-user@lucene.apache.org Subject: RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times. James, Thanks for the reply. I see your point and sure enough, reducing maxCollationTries does reduce time, however may not produce results. It see

RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times.

2013-04-19 Thread SandeepM
same amount of time. My queryCaches are activated, however don't believe it gets used for spellchecks. Thanks. -- Sandeep

RE: DirectSolrSpellChecker : vastly varying spellcheck QTime times.

2013-04-19 Thread Dyer, James
I guess the first thing I'd do is to set "maxCollationTries" to zero. This means it will only run your main query once and not re-run it to check the collations. Now see if your queries have consistent qtime. One easy explanation is that with "maxCollationTries=10"
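
James's experiment can be expressed as query parameters. A sketch using Python's urllib; the parameter names are Solr's standard spellcheck options, and the query values mirror the "chocolat factry" example elsewhere in this thread:

```python
from urllib.parse import urlencode

# maxCollationTries=0 stops Solr from re-running the query to verify
# collations, isolating whether those re-queries cause the varying QTime.
params = {
    "q": "",
    "spellcheck.q": "chocolat factry",
    "spellcheck": "true",
    "spellcheck.maxCollationTries": 0,
    "rows": 10,
    "wt": "xml",
}
print(urlencode(params))
```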

DirectSolrSpellChecker : vastly varying spellcheck QTime times.

2013-04-18 Thread SandeepM
/select?q=&spellcheck.q=chocolat%20factry&spellcheck=true&df=spell&fl=&indent=on&wt=xml&rows=10&version=2.2&echoParams=explicit In this case, the intent is to correct "chocolat factry" with "chocolate factory" which exists in my spell fiel

Re: Slow qTime for distributed search

2013-04-12 Thread Furkan KAMACI
Manuel Le Normand, I am sorry but I want to learn something. You said you have 40 dedicated servers. What is your total document count, total document size, and total shard size? 2013/4/11 Manuel Le Normand > Hi, > We have different working hours, sorry for the reply delay. Your assumed > number

Re: Slow qTime for distributed search

2013-04-11 Thread Manuel Le Normand
Hi, We have different working hours, sorry for the reply delay. Your assumed numbers are right, about 25-30Kb per doc. giving a total of 15G per shard, there are two shards per server (+2 slaves that should do no work normally). An average query has about 30 conditions (OR AND mixed), most of them

Re: Slow qTime for distributed search

2013-04-09 Thread Shawn Heisey
On 4/9/2013 3:50 PM, Furkan KAMACI wrote: Hi Shawn; You say that: *... your documents are about 50KB each. That would translate to an index that's at least 25GB* I know we can not say an exact size but what is the approximately ratio of document size / index size according to your experiences

Re: Slow qTime for distributed search

2013-04-09 Thread Furkan KAMACI
queries (up to 30 conditions on different fields), 1 qps >> rate >> >> Sharding my index was done for two reasons, based on 2 servers (4shards) >> tests: >> >> 1. As index grew above few million of docs qTime raised greatly, while >> sharding the inde

Re: Slow qTime for distributed search

2013-04-09 Thread Shawn Heisey
rate Sharding my index was done for two reasons, based on 2 servers (4shards) tests: 1. As index grew above few million of docs qTime raised greatly, while sharding the index to smaller pieces (about 0.5M docs) gave way better results, so I bound every shard to have 0.5M docs. 2

Re: Slow qTime for distributed search

2013-04-09 Thread Manuel Le Normand
on 2 servers (4shards) tests: 1. As index grew above few million of docs qTime raised greatly, while sharding the index to smaller pieces (about 0.5M docs) gave way better results, so I bound every shard to have 0.5M docs. 2. Tests showed i was cpu-bounded during queries. As i have low

Re: Slow qTime for distributed search

2013-04-08 Thread Shawn Heisey
e queries into one result response. There is no reasonable way to predict when that will happen. Observations showed the following: 1. Total qTime for the same query set is 5 time higher in collection2 (150ms->700 ms) 2. Adding to colleciton2 the *shard.info=true* param in the que

Re: Slow qTime for distributed search

2013-04-08 Thread Manuel Le Normand
on 2 servers. Second I created "collection2" - 48 shards*replicationFactor=2 collection on 24 servers, keeping same config and same num of documents per shard. Observations showed the following: 1. Total qTime for the same query set is 5 time higher in collection2 (150ms->700 ms

Slow qTime for distributed search

2013-04-07 Thread Manuel Le Normand
Hello After performing a benchmark session on small scale i moved to a full scale on 16 quad core servers. Observations at small scale gave me excellent qTime (about 150 ms) with up to 2 servers, showing my searching thread was mainly cpu bounded. My query set is not faceted. Growing to full scale

Re: High QTime when wildcards in hl.fl are used

2013-03-08 Thread Karol Sikora
I've found more interesting information about using fastVectorHighlighting combined with highlighted fields with wildcards, after testing on an isolated group of documents with text content. fvh + fulltext_*: QTime ~4s (!) fvh + fulltext_1234: QTime ~50ms no fvh + fulltext_*: QTime ~600ms n
