Sounds just like it, I will check it out!

Thanks both!
Markus

-----Original message-----
> From:Erick Erickson <erickerick...@gmail.com>
> Sent: Wednesday 2nd May 2018 17:21
> To: solr-user <solr-user@lucene.apache.org>
> Subject: Re: Collection reload leaves dangling SolrCore instances
> 
> Markus:
> 
> You may well be hitting SOLR-11882.
> 
> On Wed, May 2, 2018 at 8:18 AM, Shawn Heisey <apa...@elyograg.org> wrote:
> > On 5/2/2018 4:40 AM, Markus Jelsma wrote:
> >> One of our collections, that is heavy with tons of TokenFilters using 
> >> large dictionaries, has a lot of trouble dealing with collection reload. I 
> >> removed all custom plugins from solrconfig, dumbed the schema down and 
> >> removed all custom filters and replaced a customized decompounder with 
> >> Lucene's vanilla filter, and the problem still exists.
> >>
> >> After a collection reload, a second SolrCore instance appears for each real 
> >> core in use; each subsequent reload causes the number of instances to grow. 
> >> The dangling instances are eventually removed, except for one or two. When 
> >> working locally with, for example, two shards/one replica in one JVM, each 
> >> reload eats about 500 MB.
> >>
> >> How can we force Solr to remove those instances sooner? Forcing a GC won't 
> >> do it so it seems Solr itself actively keeps some stale instances alive.
> >
> > Custom plugins, which you did mention, would be the most likely
> > culprit.  Those sometimes have bugs where they don't properly close
> > resources.  Are you absolutely sure that there is no custom software
> > loading at all?  Removing the jars entirely (not just the config that
> > might use the jars) might be required.
> >
> > Have you been able to get heap dumps and figure out what object is
> > keeping the SolrCore alive?
> >
> > Thanks,
> > Shawn
> >
> 
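For Shawn's heap-dump question above, a minimal sketch of capturing and inspecting a dump might look like the following. This assumes a JDK is installed on the Solr host; the PID (12345) and file path are placeholders, not values from this thread.

```shell
# Optionally trigger a full GC first, so the dump contains only
# objects that are genuinely still reachable:
jcmd 12345 GC.run

# Capture a heap dump of the running Solr JVM (jcmd ships with the JDK):
jcmd 12345 GC.heap_dump /tmp/solr-heap.hprof

# Alternative with jmap; ":live" also forces a full GC before dumping:
# jmap -dump:live,format=b,file=/tmp/solr-heap.hprof 12345
```

Opening the resulting .hprof in a heap analyzer such as Eclipse MAT, listing the SolrCore instances, and running "Path to GC Roots" (excluding weak/soft references) on a stale one should show exactly which object is keeping it alive.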
