It would be a huge step forward if one could have several hundred Solr
collections, but only have a small portion of them opened/loaded at the
same time. This is similar to Elasticsearch's close index API, documented here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-open-close.html
A few months ago I opened an issue to implement the same in Solr:
https://issues.apache.org/jira/browse/SOLR-6399
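
For anyone curious, here is a minimal sketch of driving those two Elasticsearch
endpoints (POST /<index>/_close and POST /<index>/_open) from Java, using the
JDK's built-in HttpClient. The host, port, and index name "logs-2015" are
placeholder assumptions, not anything from this thread:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class IndexOpenClose {
    // Placeholder assumption: a local Elasticsearch node. A closed index
    // stops serving requests and releases memory/file handles, but stays
    // on disk and can be reopened later.
    private static final String ES = "http://localhost:9200";

    static String post(HttpClient client, String path) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(ES + path))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Close the index: it is unloaded but not deleted.
        System.out.println(post(client, "/logs-2015/_close"));
        // Reopen it on demand.
        System.out.println(post(client, "/logs-2015/_open"));
    }
}

An equivalent mechanism in Solr would let a cluster keep thousands of
collections on disk while only a working set is loaded.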

On Thu, Mar 5, 2015 at 4:42 PM, Damien Kamerman <dami...@gmail.com> wrote:

> I've tried a few variations, with 3 x ZK, 6 x nodes, Solr 4.10.3 and Solr
> 5.0, without any success and no real difference. There is a tipping point
> at around 3,000-4,000 cores (varies depending on hardware): below it I can
> restart the cloud OK within ~4 min; above it the cloud does not work and
> emits continuous 'conflicting information about the leader of shard'
> warnings.
>
> On 5 March 2015 at 14:15, Shawn Heisey <apa...@elyograg.org> wrote:
>
> > On 3/4/2015 5:37 PM, Damien Kamerman wrote:
> > > I'm running on Solaris x86, I have plenty of memory and no real limits
> > > # plimit 15560
> > > 15560:  /opt1/jdk/bin/java -d64 -server -Xss512k -Xms32G -Xmx32G
> > > -XX:MaxMetasp
> > >    resource              current         maximum
> > >   time(seconds)         unlimited       unlimited
> > >   file(blocks)          unlimited       unlimited
> > >   data(kbytes)          unlimited       unlimited
> > >   stack(kbytes)         unlimited       unlimited
> > >   coredump(blocks)      unlimited       unlimited
> > >   nofiles(descriptors)  65536           65536
> > >   vmemory(kbytes)       unlimited       unlimited
> > >
> > > I've been testing with 3 nodes, and that seems OK up to around 3,000
> > > cores total. I'm thinking of testing with more nodes.
> >
> > I have opened an issue for the problems I encountered while recreating a
> > config similar to yours, which I have been doing on Linux.
> >
> > https://issues.apache.org/jira/browse/SOLR-7191
> >
> > It's possible that the only thing the issue will lead to is improvements
> > in the documentation, but I'm hopeful that there will be code
> > improvements too.
> >
> > Thanks,
> > Shawn
> >
> >
>
>
> --
> Damien Kamerman
>