There is a Java command-line argument (-XX:OnOutOfMemoryError) that lets you run 
a command on OOM - I'd configure it to log and kill -9 Solr. Then use runit or 
something similar to supervise Solr - so that if it's killed, it just restarts.
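A minimal sketch of that setup as a runit service script (the paths, heap size, and start command are illustrative and will differ per install):

```shell
#!/bin/sh
# /etc/service/solr/run -- a runit service script.
# runit re-runs this script whenever it exits, so Solr restarts
# automatically after the OOM handler kills it.
exec 2>&1
cd /opt/solr/example
# On the first OutOfMemoryError the JVM runs the given command;
# %p expands to the JVM's pid. The OOM stack trace itself lands in
# the service log via the stderr redirect above.
exec java -Xmx2g -XX:OnOutOfMemoryError='kill -9 %p' -jar start.jar
```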

I think that is the best way to deal with OOMs. Other than that, you have to 
write a middle layer that puts limits on user requests before making Solr 
requests.
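As an example of what such a middle layer might check, here is a hypothetical sketch that clamps a user-supplied rows parameter before it ever reaches Solr (the limit, function name, and Solr URL are illustrative):

```shell
#!/bin/sh
# Hypothetical middle-layer check: cap the rows a user may request
# so a single query can't ask Solr for 100K docs at once.
MAX_ROWS=1000

clamp_rows() {
  rows="$1"
  if [ "$rows" -gt "$MAX_ROWS" ]; then
    rows="$MAX_ROWS"
  fi
  echo "$rows"
}

# The clamped value would then be used when building the Solr query, e.g.:
#   curl "http://localhost:8983/solr/select?q=*:*&rows=$(clamp_rows "$user_rows")"
clamp_rows 100000   # prints 1000
```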

- Mark

On Jun 17, 2013, at 4:44 PM, Manuel Le Normand <manuel.lenorm...@gmail.com> 
wrote:

> Hello again,
> 
> After a heavy query on my index (returning 100K docs in a single query) my
> JVM heap floods and I get a Java OOM exception, after which my GC cannot
> collect anything (GC overhead limit exceeded) since these memory chunks are
> not disposable.
> 
> I want to allow queries like this; my concern is that this case provokes a
> total Solr crash, returning a 503 Internal Server Error while trying to
> *index*.
> 
> Is there any way to separate these two logics? I'm fine with Solr not being
> able to return any response after this OOM, but I don't see the
> justification for the query flooding the JVM's internal (bounded) buffers
> for writes.
> 
> Thanks,
> Manuel
