I'm curious... how deep does the paging have to be before it becomes
problematic? Tens of pages, hundreds, thousands, millions?
And when you say deep paging, are you incrementing through all the pages down
to that depth, or "gapping" to some very large depth outright? If the former,
I am wondering whether the Solr caches are building up with all those
previous results.
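To make the distinction concrete, here is a rough SolrJ sketch of the two
patterns I mean. The URL, core name, and depths below are placeholders, and
either way each request still makes every shard collect start+rows documents:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class DeepPagingProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/core; point this at one of your SolrCloud nodes.
            HttpSolrServer solr =
                new HttpSolrServer("http://localhost:8983/solr/collection1");

            // Pattern 1: incrementing page by page down to the depth.
            for (int page = 0; page < 1000; page++) {
                SolrQuery q = new SolrQuery("*:*");
                q.setStart(page * 10); // the offset grows with every page
                q.setRows(10);
                QueryResponse rsp = solr.query(q);
                System.out.println("page " + page + ": " + rsp.getQTime() + " ms");
            }

            // Pattern 2: "gapping" straight to a very large depth in one request.
            SolrQuery deep = new SolrQuery("*:*");
            deep.setStart(1000000); // each shard still collects the top 1000010 docs
            deep.setRows(10);
            System.out.println("deep jump: " + solr.query(deep).getQTime() + " ms");
        }
    }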
And is the time simply moderately beyond expectations (e.g., 10 or 30 seconds
or a minute compared to 1 second), or... are we talking about a situation
where a core is terminally "thrashing" with garbage collection/OOM issues?
-- Jack Krupansky
-----Original Message-----
From: arin_g
Sent: Tuesday, June 05, 2012 1:34 AM
To: solr-user@lucene.apache.org
Subject: Search timeout for Solrcloud
Hi,
We use SolrCloud in production, and we are facing some issues with queries
that take very long, especially deep-paging queries; these queries keep our
servers very busy. I am looking for a way to stop (kill) queries that take
longer than a specific amount of time (say, 5 seconds). I checked timeAllowed,
but it doesn't work (the query still runs to completion). I also noticed that
there are connTimeout and socketTimeout settings for distributed searches, but
I am not sure whether they kill the thread (I want to save resources by
killing the query, not just returning a timeout). Also, if I could get partial
results, that would be ideal. Any suggestions?
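For reference, here is roughly how I am passing timeAllowed via SolrJ (the
URL and core name below are placeholders):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class TimeAllowedTest {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/core; point this at one of your SolrCloud nodes.
            HttpSolrServer solr =
                new HttpSolrServer("http://localhost:8983/solr/collection1");

            SolrQuery q = new SolrQuery("*:*");
            q.setTimeAllowed(5000); // ask Solr to stop collecting hits after ~5s

            QueryResponse rsp = solr.query(q);
            // If the limit kicked in, the response header should contain
            // partialResults=true along with whatever documents were collected.
            System.out.println("partialResults = "
                + rsp.getResponseHeader().get("partialResults"));
        }
    }

My understanding is that timeAllowed only bounds the document-collection
phase of the search, not the other phases, which may be why the full query
still appears to run.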
Thanks,
arin