Hi Tomás,

Thank you for your email.
You said "have big caches or request big pages (e.g. 100k docs)"...
Does a fq cache all the potential results, or only the ones the query returns?
e.g.: select?q=*:*&fq=bPublic:true&rows=10

=> with this query, if I have 60 millions of public documents, would it cache 10 or 60 millions of IDs? ...and does it cache it the filter cache (from fq) in the OS cache or in java heap?
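
For reference, by "filter cache" I mean the one configured in solrconfig.xml with an entry like this (the stock example; sizes here are illustrative, not necessarily what we run):

  <filterCache class="solr.FastLRUCache"
               size="512"
               initialSize="512"
               autowarmCount="0"/>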

kr,
Bastien

On 04/05/2016 02:31, Tomás Fernández Löbbe wrote:
You could use a memory analyzer tool (e.g. jmap); that could give you a
hint. But since you are migrating, I'd start by checking whether you changed
anything from the previous version, including JVM settings, schema and
solrconfig. If nothing is different, I'd try to identify which feature is
consuming the memory. If you use faceting/stats/suggester, or you have big
caches, or request big pages (e.g. 100k docs), or use Solr Cell for
extracting content, those are the usual suspects. Try to narrow it down; it
could be many things. Turn features on/off while you watch the memory (you
could use something like jconsole/jvisualvm/jstat) and see when it spikes,
and compare with the previous version. That's what I would do, at least.
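
For example, just as a sketch with the standard JDK tools (replace <pid> with the Solr process id):

  # GC activity and heap occupancy, sampled every 5 seconds
  jstat -gcutil <pid> 5000

  # histogram of the classes using the most heap (live objects only)
  jmap -histo:live <pid> | head -20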

If you manage to narrow it down to a specific feature, you can come back
to the users list and ask with some more specifics; that way someone could
point you to a solution, or maybe file a JIRA if it turns out to be a bug.

Tomás

On Mon, May 2, 2016 at 11:34 PM, Bastien Latard - MDPI AG <lat...@mdpi.com.invalid> wrote:

Hi Tomás,

Thanks for your answer.
How could I see what's using memory?
I tried to add "-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/solr/logs/OOM_Heap_dump/"
...but this doesn't seem to be very helpful...
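
As I understand it, that flag writes a java_pid<pid>.hprof file under the HeapDumpPath directory, which should be loadable with a tool like Eclipse MAT or jhat, e.g. (file name assumed for illustration):

  jhat -port 7000 /var/solr/logs/OOM_Heap_dump/java_pid26044.hprof

...but I couldn't get much out of it.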

Kind regards,
Bastien


On 02/05/2016 22:55, Tomás Fernández Löbbe wrote:

You could, but before that I'd try to see what's using your memory and see
if you can decrease that. Maybe identify why you are running OOM now and
not with your previous Solr version (assuming you weren't, and that you are
running with the same JVM settings). A bigger heap usually means more work
for the GC and less memory available for the OS cache.
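
If you do end up changing the heap size, in Solr 5+ that is normally done in solr.in.sh rather than on the java command line; a minimal sketch, value illustrative:

  # bin/solr.in.sh -- SOLR_HEAP sets both -Xms and -Xmx
  SOLR_HEAP="8g"
  # or spell out the JVM memory options explicitly:
  # SOLR_JAVA_MEM="-Xms8g -Xmx8g"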

Tomás

On Sun, May 1, 2016 at 11:20 PM, Bastien Latard - MDPI AG <lat...@mdpi.com.invalid> wrote:

Hi Guys,
The OOM killer script has been executed several times since I upgraded to Solr 6.0:

$ cat solr_oom_killer-8983-2016-04-29_15_16_51.log
Running OOM killer script for process 26044 for Solr on port 8983

Does this mean that I need to increase my Java heap?
Or should I do something else?

Here are some further logs:
$ cat solr_gc_log_20160502_0730:
}
{Heap before GC invocations=1674 (full 91):
   par new generation   total 1747648K, used 1747135K [0x00000005c0000000, 0x0000000640000000, 0x0000000640000000)
    eden space 1398144K, 100% used [0x00000005c0000000, 0x0000000615560000, 0x0000000615560000)
    from space 349504K,  99% used [0x0000000615560000, 0x000000062aa2fc30, 0x000000062aab0000)
    to   space 349504K,   0% used [0x000000062aab0000, 0x000000062aab0000, 0x0000000640000000)
   concurrent mark-sweep generation total 6291456K, used 6291455K [0x0000000640000000, 0x00000007c0000000, 0x00000007c0000000)
   Metaspace       used 39845K, capacity 40346K, committed 40704K, reserved 1085440K
    class space    used 4142K, capacity 4273K, committed 4368K, reserved 1048576K
2016-04-29T21:15:41.970+0200: 20356.359: [Full GC (Allocation Failure) 2016-04-29T21:15:41.970+0200: 20356.359: [CMS: 6291455K->6291456K(6291456K), 12.5694653 secs] 8038591K->8038590K(8039104K), [Metaspace: 39845K->39845K(1085440K)], 12.5695497 secs] [Times: user=12.57 sys=0.00, real=12.57 secs]


Kind regards,
Bastien




Kind regards,
Bastien Latard
Web engineer
--
MDPI AG
Postfach, CH-4005 Basel, Switzerland
Office: Klybeckstrasse 64, CH-4057
Tel. +41 61 683 77 35
Fax: +41 61 302 89 18
E-mail: lat...@mdpi.com
http://www.mdpi.com/
