On Tue, Dec 3, 2013 at 3:20 PM, Erick Erickson <erickerick...@gmail.com> wrote:
> You probably want to look at "transient cores", see:
> http://wiki.apache.org/solr/LotsOfCores
>
> But millions will be "interesting" for a single node, you must have some
> kind of partitioning in mind?

Wow. Thanks for that great link. Yes, we are sharding, so it's not like there would be millions of cores on one machine or even one cluster. And since the cores are one per user, this is a totally clean approach. But we still want to make sure that we are not overloading the machine. Do you have any sense of what a good upper limit might be, or how we might figure that out?

> Best,
> Erick
>
> On Tue, Dec 3, 2013 at 2:38 PM, hank williams <hank...@gmail.com> wrote:
>
> > We are building a system where there is a core for every user. There will
> > be many tens or perhaps ultimately hundreds of thousands or millions of
> > users. We do not need each of those users to have “warm” data in memory. In
> > fact, doing so would consume lots of memory unnecessarily, for users that
> > might not have logged in in a long time.
> >
> > So my question is: is the default behavior of Solr to try to keep all of
> > our cores warm, and if so, can we stop it? Also, given the number of cores
> > that we will likely have, is there anything else we should be keeping in
> > mind to maximize performance and minimize memory usage?

--
blog: whydoeseverythingsuck.com
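[For readers finding this thread later: the transient-core behavior Erick points to is configured in the legacy solr.xml format used by Solr 4.x, per the LotsOfCores wiki page. A minimal sketch; the core name "user_12345", its instanceDir, and the cache size of 128 are hypothetical values, not from this thread:]

```xml
<!-- Legacy-style solr.xml sketch (Solr 4.x era) for a lots-of-cores setup.
     transientCacheSize caps how many transient cores stay loaded at once;
     beyond that, the least-recently-used transient core is unloaded. -->
<solr persistent="true">
  <cores adminPath="/admin/cores" transientCacheSize="128">
    <!-- transient="true": this core may be unloaded under LRU pressure.
         loadOnStartup="false": this core is not loaded (warmed) at startup;
         it is opened on first request instead. -->
    <core name="user_12345"
          instanceDir="cores/user_12345"
          transient="true"
          loadOnStartup="false"/>
  </cores>
</solr>
```

[With transient="true" and loadOnStartup="false" on each per-user core, only the cores of recently active users occupy memory, which addresses the "keep all cores warm" concern in the original question.]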