Thank you, Jan, for the reply.
I will try it out.

Best,
Mark.

On Mon, Aug 12, 2019 at 6:29 PM Jan Høydahl <jan....@cominvent.com> wrote:

> I have never used such settings, but you could check out
> https://lucene.apache.org/solr/guide/8_1/common-query-parameters.html#segmentterminateearly-parameter
> which describes how, by pre-sorting the index, early termination can still
> return the most relevant docs. This will probably be easier to set up once
> https://issues.apache.org/jira/browse/SOLR-13681 is done.
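>
> For illustration only (I have not tested this myself), the pieces would look
> roughly like this in solrconfig.xml, with "timestamp" as a placeholder sort
> field:
>
>   <indexConfig>
>     <mergePolicyFactory class="org.apache.solr.index.SortingMergePolicyFactory">
>       <str name="sort">timestamp desc</str>
>       <str name="wrapped.prefix">inner</str>
>       <str name="inner.class">org.apache.solr.index.TieredMergePolicyFactory</str>
>     </mergePolicyFactory>
>   </indexConfig>
>
> and queries that sort on that same field can then terminate early:
>
>   q=*:*&sort=timestamp desc&segmentTerminateEarly=true
>
> Keep in mind that hit counts can become approximate once early termination
> kicks in.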
>
> According to that same page, you will not be able to abort long-running
> faceting using timeAllowed, but there are other ways to optimize faceting,
> such as the JSON Facet API, threaded facet execution, etc.
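>
> As a rough example (collection and field names are just placeholders), a
> JSON facet request could look like:
>
>   curl http://localhost:8983/solr/mycollection/query -d '
>   {
>     "query": "*:*",
>     "facet": {
>       "categories": { "type": "terms", "field": "category", "limit": 10 }
>     }
>   }'
>
> and for the traditional facet module you can parallelize field faceting with
> something like &facet=true&facet.field=category&facet.threads=4.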
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> On 12 Aug 2019, at 23:10, Mark Robinson <mark123lea...@gmail.com> wrote:
>
> Hi Jan,
>
> Thanks for the reply.
> Our normal search times are within 650 ms.
> We were analyzing some queries and found that a few of them were taking
> around 14675 ms, 13767 ms, etc.
> So I was curious to see whether there is some way in Solr to restrict a
> query from running beyond, say, 5 s or some other reasonable limit, even if
> it returns only partial results.
>
> That is how I came across the "timeAllowed" parameter and wanted to check
> on it.
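> For example, if I understand the docs correctly, something like
>
>   q=shoes&rows=10&timeAllowed=5000
>
> should stop the search after roughly 5 s and return whatever was collected
> so far, with partialResults=true set in the responseHeader (the query above
> is just a made-up example).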
> I was also curious to know whether "shardHandler" could be used along those
> lines, or whether it is meant for totally different functionality.
>
> Thanks!
> Best,
> Mark
>
>
> On Sun, Aug 11, 2019 at 8:17 AM Jan Høydahl <jan....@cominvent.com> wrote:
>
>> What is the root use case you are trying to solve? What kind of Solr
>> install is this? Do you not have control over the clients, or what is the
>> reason that users overload your servers?
>>
>> Normally you would scale the cluster to handle the expected load instead
>> of trying to give users timeout exceptions. What kind of query times are
>> you experiencing above 1 s, and are those queries not important enough to
>> invest in extra hardware? I am trying to understand the real reason behind
>> your questions.
>>
>> Jan Høydahl
>>
>> > On 11 Aug 2019, at 11:43, Mark Robinson <mark123lea...@gmail.com> wrote:
>> >
>> > Hello,
>> > Could someone please share their thoughts, or point to some link that
>> > helps answer my queries above?
>> > In the Solr documentation I came across a few lines on timeAllowed and
>> > shardHandler, but an example scenario for each would help me understand
>> > them more thoroughly.
>> > I am also curious to know the different ways, if any, in Solr to restrict
>> > or limit a time-consuming query from processing for a long time.
>> >
>> > Thanks!
>> > Mark
>> >
>> > On Fri, Aug 9, 2019 at 2:15 PM Mark Robinson <mark123lea...@gmail.com>
>> > wrote:
>> >
>> >>
>> >> Hello,
>> >> I have the following questions, please:
>> >>
>> >> In solrconfig.xml I created a new "/selecttimeout" handler by copying the
>> >> "/select" handler and added the following to it:
>> >>  <shardHandlerFactory class="HttpShardHandlerFactory">
>> >>    <int name="socketTimeout">10</int>
>> >>    <int name="connTimeout">20</int>
>> >>  </shardHandlerFactory>
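>> >>
>> >> (For completeness, that block sits nested inside the request handler
>> >> definition, roughly like below; the defaults section is only a placeholder
>> >> for whatever I copied over from /select:)
>> >>
>> >>  <requestHandler name="/selecttimeout" class="solr.SearchHandler">
>> >>    <lst name="defaults">
>> >>      <str name="echoParams">explicit</str>
>> >>    </lst>
>> >>    <shardHandlerFactory class="HttpShardHandlerFactory">
>> >>      <int name="socketTimeout">10</int>
>> >>      <int name="connTimeout">20</int>
>> >>    </shardHandlerFactory>
>> >>  </requestHandler>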
>> >>
>> >> 1.
>> >> Does the above mean that if no request arrives within 10 ms on the socket
>> >> handling the /selecttimeout handler, that socket will be closed?
>> >>
>> >> 2.
>> >> Same with connTimeout? I.e., the connection object remains live only if
>> >> at least one connection request comes in every 20 ms; if not, the object
>> >> gets closed?
>> >>
>> >> Suppose a time-consuming query (say, with lots of facets, etc.) is fired
>> >> against Solr. How can I ensure that Solr does not keep processing it for
>> >> more than 1 s?
>> >>
>> >> 3.
>> >> Is this achieved by setting timeAllowed=1000? Or are there any other
>> >> ways to do this in Solr?
>> >>
>> >> 4.
>> >> For the same purpose of preventing heavy queries from overloading Solr,
>> >> does the <shardHandlerFactory> above help in any way, or does the shard
>> >> handler have nothing to do with restricting a query once it has been
>> >> fired against Solr?
>> >>
>> >>
>> >> Could someone please share your views?
>> >>
>> >> Thanks!
>> >> Mark
>> >>
>>
>
>
