We’ve run into this fatal problem with 6.6 in prod. It gets overloaded, makes 4,000 threads, runs out of memory, and dies.
Not an acceptable design. Excess load MUST be rejected, otherwise the system goes into a stable congested state. I was working with John Nagle when he figured this out in the late 1980s. https://www.researchgate.net/publication/224734039_On_Packet_Switches_with_Infinite_Storage

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Dec 9, 2019, at 11:14 PM, Mikhail Khludnev <m...@apache.org> wrote:
>
> My experience with "OutOfMemoryError: unable to create new native thread"
> is as follows: it occurs in environments where devs refuse to use thread
> pools in favor of good old new Thread().
> Then it gets rather interesting: if there is plenty of heap, GC doesn't
> sweep Thread instances. Since they are native in Java, each of them holds
> some RAM for its native stack, and that exhausts stack space at some point.
> So, check how many threads the JVM holds after this particular OOME occurs
> with jstack; you can even force GC to release that native stack space. Then
> rewrite the app, or reduce the heap to force GC.
>
> On Tue, Dec 10, 2019 at 9:44 AM Shawn Heisey <apa...@elyograg.org> wrote:
>
>> On 12/9/2019 2:23 PM, Joe Obernberger wrote:
>>> Getting this error on some of the nodes in a solr cloud during heavy
>>> indexing:
>>
>> <snip>
>>
>>> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
>>
>> Java was not able to start a new thread. Most likely this is caused by
>> the operating system imposing limits on the number of processes or
>> threads that a user is allowed to start.
>>
>> On Linux, the default limit is usually 1024 processes. It doesn't take
>> much for a Solr install to need more threads than that.
>>
>> How to increase the limit will depend on what OS you're running on.
>> Typically on Linux, this is controlled by /etc/security/limits.conf. If
>> you're not on Linux, then you'll need to research how to increase the
>> process limit.
>>
>> As long as you're fiddling with limits, you'll probably also want to
>> increase the open file limit.
>>
>> Thanks,
>> Shawn
>
>
> --
> Sincerely yours
> Mikhail Khludnev
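[Editor's note: the rejection behavior Walter and Mikhail describe — a bounded thread pool that sheds excess load instead of spawning unbounded `new Thread()`s — can be sketched in plain Java. The class and parameter values below are illustrative, not taken from Solr.]

```java
import java.util.concurrent.*;

public class BoundedPoolDemo {

    /** Submits n short tasks to a bounded pool; returns {accepted, rejected}. */
    static int[] submitAll(int n) throws InterruptedException {
        // Fixed pool of 4 workers with a queue capacity of 8; anything
        // beyond 4 running + 8 queued is rejected immediately instead of
        // creating yet another native thread.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(8),
                new ThreadPoolExecutor.AbortPolicy());

        int accepted = 0, rejected = 0;
        for (int i = 0; i < n; i++) {
            try {
                pool.execute(() -> {
                    try { Thread.sleep(50); } catch (InterruptedException e) { }
                });
                accepted++;
            } catch (RejectedExecutionException e) {
                rejected++;   // load shed: the caller finds out right away
            }
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return new int[] { accepted, rejected };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] counts = submitAll(100);
        System.out.println("accepted=" + counts[0] + " rejected=" + counts[1]);
    }
}
```

With AbortPolicy the caller gets a RejectedExecutionException the moment capacity is exceeded, which is the point: overload becomes an explicit error to handle (back off, return 503, etc.) rather than a slow slide into thread exhaustion and OOME.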
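[Editor's note: for the OS-level limits Shawn mentions, a typical /etc/security/limits.conf entry on Linux looks like the fragment below. The user name "solr" and the value 65000 are illustrative assumptions; pick values appropriate for your install.]

```
# /etc/security/limits.conf — raise process/thread and open-file limits
# for the (assumed) user running Solr
solr  soft  nproc   65000
solr  hard  nproc   65000
solr  soft  nofile  65000
solr  hard  nofile  65000
```

The nproc lines address the "unable to create new native thread" error; the nofile lines cover the open-file limit. Changes take effect on the user's next login session.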