Not 100% sure, but I think Java may set some default sizes based on the
amount of memory available; you could also try *lower* values in the JVM
flags.
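
For example (untested, just to see whether startup gets past the
allocation with a smaller heap), something like:

doas -u _opensearch env OPENSEARCH_JAVA_OPTS='-Xms512m -Xmx512m' \
    /usr/local/opensearch/bin/opensearch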

--
 Sent from a phone, apologies for poor formatting.

On 14 June 2022 08:49:52 Omar Polo <o...@openbsd.org> wrote:

Yifei Zhan <openbsd@zhan.science> wrote:
On 22/05/26 11:20AM, Omar Polo wrote:
Hello ports,

now that we have an updated jna, here's the port that I was working on:

% pkg_info opensearch
Information for inst:opensearch-1.3.2

I tested this on my amd64 box; the package builds, but I'm unable to start it:

# doas -u _opensearch /usr/local/opensearch/bin/opensearch

# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1073741824 bytes for Failed to commit area from 0x00000000c0000000 to 0x0000000100000000 of length 1073741824.
# An error report file with more information is saved as:
# /var/log/opensearch/hs_err_pid11809.log
error:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='ENOMEM' (errno=12)
        at org.opensearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:139)
        at org.opensearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:101)
        at org.opensearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:72)
        at org.opensearch.tools.launchers.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:152)
        at org.opensearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:110)

this is interesting.  Just out of curiosity, could you test with
elasticsearch too?

(full log file was sent off-list)

unfortunately I can't really tell what's going wrong.

# Native memory allocation (mmap) failed to map 1073741824 bytes for
Failed to commit area from 0x00000000c0000000 to 0x0000000100000000 of
length 1073741824.

it fails to allocate one gig of memory, that's all I can see.  Don't
know if it's just a coincidence or not, but that's the value of
MaxHeapSize mentioned in your crash report:

[Global flags]
size_t MaxHeapSize = 1073741824 {product} {command line}
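
(for comparison, you can ask the JVM directly what heap size it picks by
default on that box; this is a plain HotSpot flag, nothing
OpenSearch-specific, and the java path depends on which jdk the port
uses)

java -XX:+PrintFlagsFinal -version | grep -i maxheapsize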

it's a shot in the dark, but does it work if you change the JVM flags so
it uses higher limits?  you could try running opensearch from the
command line and setting OPENSEARCH_JAVA_OPTS, for example (with very
large values just to test):

doas -u _opensearch env OPENSEARCH_JAVA_OPTS='-Xms8g -Xmx8G' \
/usr/local/opensearch/bin/opensearch
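
(it may also be worth double-checking which limits the _opensearch user
actually ends up with; something like

doas -u _opensearch /bin/sh -c 'ulimit -a'

should show the datasize, openfiles and the other soft limits in
effect.)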

I've bumped maxfiles and the login class limits, and allocated 22GB of RAM to this test box:

$ sysctl | grep files
kern.maxfiles=65535

$ vmstat
procs    memory       page                    disks    traps          cpu
r   s   avm     fre  flt  re  pi  po  fr  sr wd0 cd0  int   sys   cs us sy id
1  73  224M  19920M 5988   0   0   0   0   0 144   0  167  8965 1401  4  5 91

login.conf:

opensearch:\
	:openfiles=65536:\
	:tc=daemon:

(I've added a login class file for it after some comments on the port; I
still have to send an updated version because 2.0 was released and I
need to update it.  Note that even though I've added the login class,
I'm still running without one.)

I'm using a very similar configuration and all I can say is that it
works for me ^^"

 PID USERNAME PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
5197 _opensea  10    0 1610M 1505M idle      fsleep    1:18  4.54% ...

I don't have lots of data loaded in, merely a few tens of megabytes of
documents or so, that's why the memory consumption is so "modest".

also, in this case the openfiles limit doesn't matter.  you need a
higher openfiles value when running in "production mode"; for a simpler
local/testing setup you should be fine with the default limit (although
it'll complain in the logs.)
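
(for reference, "production mode" kicks in when opensearch binds to
something other than loopback, e.g. with a line like

network.host: 192.0.2.10

in opensearch.yml; at that point the bootstrap checks turn a too-low
openfiles limit from a log warning into a startup error.)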

I also tried:

opensearch:\
	:datasize-cur=infinity:\
	:datasize-max=infinity:\
	:openfiles-max=16384:\
	:openfiles-cur=16384:\
	:tc=daemon:

... but the errors remain
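
(for what it's worth, if /etc/login.conf.db exists it has to be rebuilt
with cap_mkdb /etc/login.conf after editing, otherwise the old limits
stay in effect.)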

If I try to start it with rcctl, it fails after a few seconds:

$ doas rcctl start opensearch; sleep 10; doas rcctl check opensearch;
opensearch(ok)
opensearch(failed)
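
(running the rc script with debug output, e.g. doas rcctl -d start
opensearch, might show a bit more, and a fresh hs_err_pid*.log under
/var/log/opensearch/ should have the JVM's side of the story.)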

Have I missed something?
