Make them aware of what is required. Solr is not designed to return huge 
result sets in a single request.

If you need to do this, you will need to run the JVM with a big enough heap to 
build the response. You are getting an OOM because the JVM does not have enough 
memory to build a response with 100K documents.
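
For what it's worth, here is a minimal SolrJ sketch of pulling a large result
set in batches rather than 100K rows at once. The URL, core name, sort field,
and batch size are illustrative, and it assumes the SolrJ 4.x client
(HttpSolrServer):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;
    import org.apache.solr.common.SolrDocumentList;

    public class BatchedFetch {
        public static void main(String[] args) throws Exception {
            // Illustrative URL and core name.
            SolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

            int batchSize = 1000;          // keep each response small enough for the heap
            int start = 0;
            long numFound = Long.MAX_VALUE;

            while (start < numFound) {
                SolrQuery query = new SolrQuery("*:*");
                query.setStart(start);
                query.setRows(batchSize);
                query.set("sort", "id asc"); // stable sort so pages don't overlap

                QueryResponse rsp = solr.query(query);
                SolrDocumentList page = rsp.getResults();
                numFound = page.getNumFound();

                for (SolrDocument doc : page) {
                    // process each document here
                }
                start += batchSize;
            }
        }
    }

Note that paging with very large start offsets gets progressively more
expensive too, so keep the total you pull this way reasonable.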

wunder

On Jun 17, 2013, at 1:57 PM, Manuel Le Normand wrote:

> One of my users requested it; they are less aware of what's allowed, and I
> don't want to block them a priori for long, specific requests (there are
> other params that might end up OOMing me).
> 
> I thought of the timeAllowed restriction, but this solution also cannot
> guarantee that the JVM heap would not get flooded during that window (for
> example, if everything is already cached and my RAM I/O is very fast).
> 
> 
> On Mon, Jun 17, 2013 at 11:47 PM, Walter Underwood 
> <wun...@wunderwood.org>wrote:
> 
>> Don't request 100K docs in a single query. Fetch them in smaller batches.
>> 
>> wunder
>> 
>> On Jun 17, 2013, at 1:44 PM, Manuel Le Normand wrote:
>> 
>>> Hello again,
>>> 
>>> After a heavy query on my index (returning 100K docs in a single query),
>>> my JVM heap floods and I get a Java OOM exception, after which the GC
>>> cannot collect anything (GC overhead limit exceeded), as these memory
>>> chunks are not disposable.
>>> 
>>> I want to allow queries like this; my concern is that this case provokes
>>> a total Solr crash, returning a 503 Internal Server Error while trying
>>> to *index*.
>>> 
>>> Is there any way to separate these two concerns? I'm fine with Solr not
>>> being able to return any response after this OOM, but I don't see the
>>> justification for the query flooding the JVM's internal (bounded)
>>> buffers used for indexing.
>>> 
>>> Thanks,
>>> Manuel
>> 