On 2/12/2016 2:57 AM, Matteo Grolla wrote:
>      Tell me if I'm wrong, but qtime accounts for search time excluding the
> fetch of stored fields (I have a 90ms qtime and ~30s to obtain the
> results on the client on a LAN infrastructure for a 300kB response). Debug
> explains how much of qtime is used by each search component.
> For me 90ms is fine, I wouldn't spend time trying to make it 50ms; it's
> the ~30s to obtain the response that I'd like to tackle.
30 seconds to retrieve data for a 300KB result indicates a *severe*
performance issue.

Stored fields in Lucene (and by extension, Solr) are compressed in
version 4.1 and later.  This means that they must be retrieved from disk
and then uncompressed before they can be sent back to clients.  Solr
does not offer any way to turn the compression off, but benchmarks have
shown that the overhead incurred by the compression and decompression on
a lightly loaded host is minimal.
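If you want to confirm where the time is going, something like the rough
sketch below (assuming Python with the requests library and a local Solr
core at a hypothetical URL) compares the QTime reported in the response
header against the wall-clock time needed to get the full response back:

import time
import requests

# Hypothetical core name and query; substitute your own.
url = "http://localhost:8983/solr/yourcore/select"
params = {"q": "*:*", "rows": 100, "wt": "json"}

start = time.time()
resp = requests.get(url, params=params)
elapsed_ms = (time.time() - start) * 1000

data = resp.json()
qtime_ms = data["responseHeader"]["QTime"]

# QTime covers the search itself; the difference is mostly stored-field
# retrieval/decompression, response writing, and network transfer.
print("QTime: %d ms, total: %.0f ms, response size: %d bytes"
      % (qtime_ms, elapsed_ms, len(resp.content)))

If the gap between QTime and the total time is consistently huge, the
problem is in retrieving and returning the documents, not in the search.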

If your system is running in a low memory situation, then the OS disk
cache may not be effective, which slows down data retrieval.  Also, if
available memory is low and the disks are extremely busy, then it may
take a very long time to retrieve the data from the disk.
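A quick way to get a feel for this is to compare the machine's available
memory to the on-disk size of the index.  A minimal sketch, Linux-only,
assuming a hypothetical index path that you would adjust for your install:

import os

def meminfo_kb(field):
    """Read a field (in kB) from /proc/meminfo on Linux."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    return 0

def dir_size_bytes(path):
    """Total size of all files under a directory."""
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

# Hypothetical index location; adjust for your setup.
index_path = "/var/solr/data/yourcore/data/index"
available_gb = meminfo_kb("MemAvailable") / 1024.0 / 1024.0
index_gb = dir_size_bytes(index_path) / float(1024 ** 3)

print("MemAvailable: %.1f GB, index size: %.1f GB" % (available_gb, index_gb))

If available memory is only a small fraction of the index size, the OS
disk cache cannot do its job and every fetch is likely to hit the disk.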

If the CPUs on the system are extremely busy, then there may not be much
CPU time for the decompression.

A combination of low memory, extremely heavy disk I/O, and very busy
CPUs could potentially cause this kind of delay.  What can you tell us
about your index, your server, and how busy that server is?  If Solr is
running in a virtual machine, then the overall CPU, memory, and I/O load
on the physical host will be relevant.

Here's some general information about performance problems:

https://wiki.apache.org/solr/SolrPerformanceProblems

Thanks,
Shawn
