Thank you, Erick - it was a mistake for this collection to be running in
schemaless mode; I will fix that. Right now the 'PROCESSOR_LOGS' schema
only has 10 fields, while another managed schema in the system has
over 1,000.
Shawn - I did see a post about setting vm.max_map_count higher (it was ...)
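(For the archives: vm.max_map_count is a plain Linux sysctl. Checking and
raising it looks like the below; 262144 is just a value commonly suggested
in such posts, not an official Solr number.)

sysctl vm.max_map_count                  # show the current limit
sudo sysctl -w vm.max_map_count=262144   # raise it until the next reboot
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf   # persist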
One other red flag is that you’re apparently running in “schemaless” mode, based on this stack frame:
org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:475)
When running in schemaless mode, if Solr encounters a field in the incoming documents that is not in the schema, it guesses the field type and adds the field to the schema automatically.
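(For the archives: the ref guide documents turning field guessing off via
the update.autoCreateFields user property and the Config API; the host
below is a placeholder, PROCESSOR_LOGS is the collection in question:)

curl http://localhost:8983/solr/PROCESSOR_LOGS/config -d \
  '{"set-user-property": {"update.autoCreateFields": "false"}}'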
We’ve run into this fatal problem with 6.6 in prod. It gets overloaded, makes
4000 threads, runs out of memory, and dies.
Not an acceptable design. Excess load MUST be rejected; otherwise the system
goes into a stable congested state.
I was working with John Nagle when he figured this out in the ...
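For what it's worth, the standard Java way to get that rejection behavior is
a bounded pool with an AbortPolicy; a minimal sketch (pool sizes and class
name are illustrative, not tuned for Solr):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedIndexPool {
    // Hard caps, chosen for illustration: at most 32 workers, 1000 queued
    // tasks. Beyond that, submissions fail fast instead of spawning threads
    // until the JVM hits the native-thread limit.
    private static final ExecutorService POOL = new ThreadPoolExecutor(
            8, 32, 60L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(1000),
            new ThreadPoolExecutor.AbortPolicy());

    public static void submit(Runnable work) {
        try {
            POOL.execute(work);
        } catch (RejectedExecutionException e) {
            // Pool and queue are full: shed the load (e.g. answer HTTP 503)
            // rather than queuing or creating threads without bound.
        }
    }
}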
My experience with "OutOfMemoryError: unable to create new native thread"
as follows: it occurs on envs where devs refuse to use threadpools in favor
of old good new Thread().
Then, it turns rather interesting: If there are plenty of heap, GC doesn't
sweep Thread instances. Since they are native i
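If anyone wants to confirm that pattern on a live node, the stock JMX thread
counters make it visible; a self-contained sketch (class name is mine):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadWatch {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        while (true) {
            // A live count that climbs under steady load, with the
            // started count racing far ahead of it, is the signature
            // of per-task new Thread() usage instead of a pool.
            System.out.printf("live=%d peak=%d started=%d%n",
                    mx.getThreadCount(), mx.getPeakThreadCount(),
                    mx.getTotalStartedThreadCount());
            Thread.sleep(5_000);
        }
    }
}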
On 12/9/2019 2:23 PM, Joe Obernberger wrote:
Getting this error on some of the nodes in a solr cloud during heavy
indexing:
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
Java was not able to start a new thread. Most likely this is caused by
the operating system limiting the number of processes/threads that a user
is allowed to start.
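Concretely, the limits worth checking on Linux before blaming the heap
(these are standard kernel knobs, nothing Solr-specific):

ulimit -u                          # per-user process limit; Linux counts threads against it
cat /proc/sys/kernel/threads-max   # system-wide thread cap
cat /proc/sys/vm/max_map_count     # each thread stack consumes memory mappings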
Getting this error on some of the nodes in a solr cloud during heavy
indexing:
null:org.apache.solr.common.SolrException: Server error writing document id
COLLECT20005437492077_activemq:queue:PAXTwitterExtractionQueue to the index
at org.apache.solr.update.DirectUpdateHandler2.addDoc(D ...