Hello,

One of our collections, which makes heavy use of TokenFilters with large 
dictionaries, has a lot of trouble with collection reload. I removed all 
custom plugins from solrconfig.xml, dumbed the schema down, removed all 
custom filters, and replaced a customized decompounder with Lucene's vanilla 
filter, and the problem still exists.

After a collection reload, a second SolrCore instance appears for each real 
core in use, and each subsequent reload causes the number of instances to 
grow. The dangling instances are eventually removed, except for one or two. 
When working locally with, for example, two shards and one replica in a 
single JVM, each reload eats about 500 MB.
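In case it helps to reproduce, a minimal way to trigger and observe the dangling instances could look like the sketch below; the collection name, port, and pid are placeholders, not values from our setup:

```shell
# Reload the collection via the Collections API
# ("mycoll" and port 8983 are placeholders).
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycoll"

# Count live SolrCore instances on the heap after the reload.
# -histo:live forces a full GC first, so anything still listed
# is being kept alive by a strong reference, not just pending GC.
# <SOLR_PID> is the Solr JVM's process id.
jmap -histo:live <SOLR_PID> | grep org.apache.solr.core.SolrCore
```

Repeating the reload and the histogram shows the instance count climbing by one per core each time.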

How can we force Solr to release those instances sooner? Forcing a GC 
doesn't do it, so it seems Solr itself actively keeps some stale instances 
alive.

Many thanks,
Markus
