> > cca x10 search performance improvement?
> >
> > Sorry for repeating your words, just trying to confirm and understand.
> >
> > Thanks,
> > Otis
> > --
> > Sematext -- http://sematext.com/ -- Solr - Lucene - Nutch

> ----- Original Message -----
> > From: Raghuveer Kancherla
> > To: solr-user@lucene.apache.org
> > Sent: Thu, December 3, 2009 8:43:16 AM
> > Subject: Re: Retrieving large num of docs
> >
> > Hi Hoss,

Hi Hoss,
I was experimenting with various queries to solve this problem and in one
such test I remember that requesting only the ID did not change the
retrieval time. To be sure, I tested it again using the curl command today
and it confirms my previous observation.
Also, enableLazyFieldLoading ...
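
The same check can be scripted from solrj instead of curl; the sketch below
is only illustrative, and the Solr URL, query string, and row count are
made-up placeholders rather than values taken from this thread.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class IdOnlyCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL; point this at the real Solr instance.
        CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrQuery q = new SolrQuery("*:*"); // placeholder query
        q.setFields("id");                  // request only the unique key field
        q.setRows(300);                     // docs per request

        long start = System.currentTimeMillis();
        QueryResponse rsp = server.query(q);
        System.out.println(rsp.getResults().size() + " docs in "
                + (System.currentTimeMillis() - start) + " ms");
    }
}

Running it once with and once without the setFields("id") line gives the
with/without-fields comparison described above.
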
Hi Hoss/Andrew,
I think I solved the problem of retrieving 300 docs per request for now. The
problem was that I was storing 2 moderately large multivalued text fields, even
though I was not retrieving them during search time. I reindexed all my
data without storing these fields. Now the response time (time for ...

Thanks Hoss,
In my previous mail, I was measuring the system time difference between
sending an (HTTP) request and receiving a response. This was being run on a
(different) client machine.
Like you suggested, I tried to time the response on the server itself as
follows:
$ /usr/bin/time -p curl -sS ...

Hi Raghu,
Let me describe our use case in more detail. Probably that will clarify
things.
The usual use case for Lucene/Solr is retrieving a small portion of the
result set (10-20 documents). In our case we need to read the whole result
set, and this creates a huge load on the Lucene index, meaning a lot of ...

Hi Andrew,
I applied the patch you suggested. I am not finding any significant changes
in the response times.
I am wondering if I forgot some important configuration setting, etc.
Here is what I did:
1. Wrote a small program using solrj to use EmbeddedSolrServer (most of
the code is from the ...
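
Raghuveer's preview cuts off before his code, but a wiki-style
EmbeddedSolrServer setup of the kind mentioned in step 1 might look roughly
like the sketch below; the solr home path is a made-up placeholder, and the
empty core name assumes a single-core setup.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.core.CoreContainer;

public class EmbeddedQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical path to the Solr home directory (solr.xml, conf/, data/).
        System.setProperty("solr.solr.home", "/path/to/solr/home");

        CoreContainer.Initializer initializer = new CoreContainer.Initializer();
        CoreContainer coreContainer = initializer.initialize();
        // An empty core name selects the default core in a single-core setup.
        EmbeddedSolrServer server = new EmbeddedSolrServer(coreContainer, "");

        SolrQuery q = new SolrQuery("*:*"); // placeholder query
        q.setRows(300);
        QueryResponse rsp = server.query(q);
        System.out.println("numFound = " + rsp.getResults().getNumFound());

        coreContainer.shutdown();
    }
}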

Hi Andrew,
We are running Solr using its HTTP interface from Python. From the resources
I could find, EmbeddedSolrServer is possible only if I am using Solr from a
Java program. It will be useful to understand if a significant part of the
performance increase is due to bypassing HTTP before going ...

Hi,
We obtain ALL documents for every query; the index size is about 50k. We use
a number of stored fields. Often the result set size is several thousand
docs.
We did the following to make it faster:
1. Use EmbeddedSolrServer
2. Patch Solr to avoid unnecessary marshalling while using ...
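
Andrew's list is cut off at the patch step, but item 1 of the recipe, reading
the entire result set from an embedded core, might look roughly like the
sketch below; the field name "id" and the use of a single huge rows value are
illustrative assumptions, not details from the thread.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrDocumentList;

public class ReadWholeResultSet {
    // 'server' would be the EmbeddedSolrServer built as in the earlier sketch.
    static SolrDocumentList fetchAll(SolrServer server, String queryString)
            throws Exception {
        SolrQuery q = new SolrQuery(queryString);
        // Workable for result sets of a few thousand docs, as described above;
        // a very large rows value pulls everything back in one response.
        q.setRows(Integer.MAX_VALUE);
        QueryResponse rsp = server.query(q);
        SolrDocumentList docs = rsp.getResults();
        for (SolrDocument doc : docs) {
            // "id" is a hypothetical unique-key field name.
            System.out.println(doc.getFieldValue("id"));
        }
        return docs;
    }
}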

Hi,
I am using Solr 1.4 for searching through half a million documents. The
problem is, I want to retrieve nearly 200 documents for each search query.
The query time in the Solr logs is showing 0.02 seconds and I am fairly
happy with that. However, Solr is taking a long time (4 to 5 secs) to return
the results ...
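
The gap described here, a QTime of 0.02 seconds in the logs versus several
seconds to get the response, can be observed directly from solrj; the URL,
query, and row count below are placeholders, not values from the thread.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class QTimeVsWallClock {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrQuery q = new SolrQuery("*:*"); // placeholder query
        q.setRows(200);                     // roughly the 200 docs per query described above

        long start = System.currentTimeMillis();
        QueryResponse rsp = server.query(q);
        long wallClock = System.currentTimeMillis() - start;

        // QTime is the search time Solr reports in its logs; loading stored
        // fields for every row, writing the response, and transferring it over
        // the network are not part of that number.
        System.out.println("QTime (ms):      " + rsp.getQTime());
        System.out.println("wall clock (ms): " + wallClock);
    }
}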