On Mon, 2016-12-12 at 10:13 +0000, Alfonso Muñoz-Pomer Fuentes wrote:
> I’m writing because in our web application we’re using Solr 5.1.0
> and currently we’re hosting it on a VM with 32 GB of RAM (of which 30
> are dedicated to Solr and nothing else is running there).

This leaves very little memory for disk cache. I hope your underlying
storage is local SSDs and not spinning drives over the network.

>  We have four cores, that are this size:
> - 25.56 GB, Num Docs = 57,860,845
> - 12.09 GB, Num Docs = 173,491,631

Smallish in bytes, largish in document count.

> We aren’t indexing on this machine, and we’re getting OOM relatively 
> quickly (after about 14 hours of regular use).

The usual suspect for OOMs after some time is the filterCache. Worst-
case entries in that cache take up 1 bit/document, which means ~7MB and
~22MB respectively for the two cores above. If your filterCache size is
set to 1000 for each of those, that means (7MB+22MB)*1000 ~= your
entire heap.
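The arithmetic above can be checked directly; a quick sketch using the document counts from the core listing:

```python
# Worst-case filterCache entry: one bitset, 1 bit per document in the core.
num_docs = [57_860_845, 173_491_631]

# Bytes per worst-case entry = ceil(num_docs / 8), shown here in MiB.
entry_mb = [(n + 7) // 8 / 1024**2 for n in num_docs]
print([round(mb, 1) for mb in entry_mb])  # → [6.9, 20.7]

# With filterCache size=1000 on both cores, the worst case is:
total_gb = sum(entry_mb) * 1000 / 1024
print(round(total_gb, 1))  # → 26.9, essentially the whole 30 GB heap
```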


>  Right now we have a Cron job that restarts Solr every 12 hours, so
> it’s not pretty. We use faceting quite heavily

Hopefully on docValued fields?
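For reference, a facet field with docValues enabled might look like this in the schema (field and type names here are purely illustrative); docValues keep faceting data off-heap and memory-mapped rather than built in the fieldCache:

```xml
<!-- Illustrative only: docValues="true" is the important part -->
<field name="category" type="string" indexed="true" stored="true" docValues="true"/>
```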

>  and mostly as a document storage server (we want full data sets
> instead of the n most relevant results).

Hopefully with deep paging, as opposed to rows=173491631?
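Deep paging with cursorMark, sketched in Python. Here `fetch_page` stands in for the actual HTTP call to Solr and is an assumption, but the cursor handshake is the documented contract: the sort must include the uniqueKey, the first cursor is `*`, and you stop when the returned cursor stops changing:

```python
def fetch_all(fetch_page, rows=1000):
    """Drain a full result set using Solr's cursorMark deep paging.

    fetch_page(cursor, rows) is assumed to issue a query with
    sort=id asc (the uniqueKey must be in the sort) and return
    (docs, nextCursorMark), as Solr does.
    """
    cursor = "*"  # initial cursorMark
    while True:
        docs, next_cursor = fetch_page(cursor, rows)
        yield from docs
        if next_cursor == cursor:  # unchanged cursor -> result set is drained
            break
        cursor = next_cursor

# Tiny fake "Solr" just to show the loop terminating:
def fake_solr(cursor, rows):
    pages = {"*": ([1, 2], "c1"), "c1": ([3], "c2"), "c2": ([], "c2")}
    return pages[cursor]

print(list(fetch_all(fake_solr)))  # → [1, 2, 3]
```

Each page is a cheap, constant-cost request, instead of one response the size of the whole core.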

> I don’t know if what we’re experiencing is usual given the index size
> and memory constraint of the VM, or something looks like it’s wildly 
> misconfigured.

I would have guessed that your heap was quite large enough for a static
index, but that is just ... guesswork.

> Would upgrading to Solr 6 make sense?

It would not help in itself, but if you also switched to using
streaming for your presumably large exports, it would lower memory
requirements.
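For illustration, a full export in Solr 6 might look something like this streaming expression (collection and field names are assumptions): unlike a huge rows= request, it pulls sorted results through the /export handler instead of building the whole response in heap:

```
search(mycollection,
       q="*:*",
       fl="id,field_a",
       sort="id asc",
       qt="/export")
```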

- Toke Eskildsen, State and University Library, Denmark
