I'm running on Solaris x86 with plenty of memory and no real resource limits:
# plimit 15560
15560:  /opt1/jdk/bin/java -d64 -server -Xss512k -Xms32G -Xmx32G
-XX:MaxMetasp
   resource              current         maximum
  time(seconds)         unlimited       unlimited
  file(blocks)          unlimited       unlimited
  data(kbytes)          unlimited       unlimited
  stack(kbytes)         unlimited       unlimited
  coredump(blocks)      unlimited       unlimited
  nofiles(descriptors)  65536           65536
  vmemory(kbytes)       unlimited       unlimited

I've been testing with 3 nodes, and that seems OK up to around 3,000 cores
total. I'm thinking of testing with more nodes.


On 5 March 2015 at 05:28, Shawn Heisey <apa...@elyograg.org> wrote:

> On 3/4/2015 2:09 AM, Shawn Heisey wrote:
> > I've come to one major conclusion about this whole thing, even before
> > I reach the magic number of 4000 collections. Thousands of collections
> > is not at all practical with SolrCloud currently.
>
> I've now encountered a new problem.  I may have been hasty in declaring
> that an increase of jute.maxbuffer is not required.  There are now 3715
> collections, and I've seen a zookeeper exception that may indicate an
> increase actually is required.  I have added that parameter to the
> startup and when I have some time to look deeper, I will see whether
> that helps.
>
> Before 5.0, the maxbuffer would have been exceeded by only a few hundred
> collections ... so this is definitely progress.
>
> Thanks,
> Shawn
>
>
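For anyone hitting the same limit: jute.maxbuffer is read as a Java system
property, and it has to be raised on both the ZooKeeper servers and the
Solr (client) side to take effect. The 4 MB value below is just an
illustrative assumption, not a recommendation; a minimal sketch:

```shell
# On each ZooKeeper server, e.g. via zookeeper-env.sh or the service wrapper:
SERVER_JVMFLAGS="-Djute.maxbuffer=4194304"

# On each Solr node, e.g. appended to SOLR_OPTS in solr.in.sh:
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=4194304"
```

Both sides should use the same value; raising it only on the client will
not help, since the server still rejects znodes larger than its own limit.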


-- 
Damien Kamerman
