[ https://issues.apache.org/jira/browse/SOLR-15133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17281872#comment-17281872 ]

Mike Drob commented on SOLR-15133:
----------------------------------

I think [https://shipilev.net/jvm/anatomy-quarks/2-transparent-huge-pages/] is 
a better explainer of what is going on, including the difference between 
{{UseLargePages}} and {{UseTransparentHugePages}}.
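
For concreteness, a rough sketch of how the two flags could be passed to a containerized Solr; this assumes the official {{solr}} image, which appends {{SOLR_OPTS}} to the JVM options, so treat the exact tag and hook as placeholders for your own setup:

{code}
# Sketch only; assumes the official solr image honors SOLR_OPTS for extra JVM flags.

# Option 1: silence the "Failed to reserve shared memory" warning by turning
# explicit huge pages (hugetlbfs/SHM) off entirely.
docker run -e SOLR_OPTS="-XX:-UseLargePages" solr:8.7

# Option 2: keep large pages, but go through the kernel's THP support, which
# does not need a pre-reserved hugetlbfs pool (THP must be enabled in the kernel).
docker run -e SOLR_OPTS="-XX:+UseLargePages -XX:+UseTransparentHugePages" solr:8.7
{code}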

Keeping this enabled for heaps beyond 1G (which is most Solr heaps, IME) appears 
to be beneficial when the system supports it.
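
To make "when the system supports it" concrete, the host-side knobs on a Linux host look roughly like this (values are illustrative, not recommendations):

{code}
# Check whether the kernel hands out transparent huge pages at all; "madvise" is
# enough for -XX:+UseTransparentHugePages, since the JVM madvise()s its heap.
cat /sys/kernel/mm/transparent_hugepage/enabled   # e.g. "always [madvise] never"
echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/enabled

# Explicit huge pages (-XX:+UseLargePages without THP) need a pre-reserved pool:
sudo sysctl -w vm.nr_hugepages=1024               # ~2G worth of 2M pages
grep -i huge /proc/meminfo                        # verify the reservation
{code}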

Now for the wrinkle... I believe both hugetlbfs and THP rely on kernel 
settings/parameters, and Docker images don't have kernels of their own. macOS 
doesn't support Large Pages 
([https://bugs.openjdk.java.net/browse/JDK-8233062]), which suggests that 
processes running in Docker for Mac wouldn't either. I don't know whether the 
same holds for Windows/Linux, or whether the Docker engines there are able to 
delegate that memory management request to the host.
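
One way to see which kernel a container is actually talking to is to ask from inside it; any Linux-based image would do, {{busybox}} here is just for illustration. On Linux this reports the host kernel's state, while on Docker for Mac/Windows it reports the embedded Linux VM's state:

{code}
# Both commands read the kernel the container shares, not anything baked into the image.
docker run --rm busybox grep -i huge /proc/meminfo
docker run --rm busybox cat /sys/kernel/mm/transparent_hugepage/enabled
{code}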

> Document how to eliminate Failed to reserve shared memory warning
> -----------------------------------------------------------------
>
>                 Key: SOLR-15133
>                 URL: https://issues.apache.org/jira/browse/SOLR-15133
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public) 
>          Components: Docker, documentation
>    Affects Versions: 8.7
>            Reporter: David Eric Pugh
>            Assignee: David Eric Pugh
>            Priority: Minor
>             Fix For: master (9.0)
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Inspired by a conversation on 
> [https://github.com/docker-solr/docker-solr/issues/273], it would be good to 
> document how to get rid of the shared memory warning in Docker setups.


