Hi Toke,

I don't think I answered your question properly.

With the current 1 core/customer setup many cores are idle. The redesign we
are working on will move most of our searches from being driven by the
database to being driven by SOLR (the current split is 90% database, 10%
SOLR). With that change, all cores will see traffic.

We have 25G of data in the index (across all cores), currently hosted on a
2-core VM with 32G of memory. We are making some changes to the schema and
the analyzers, and we expect the index size to grow by about 25% as a
result. To support this we will be moving to a VM with 4 cores and 64G of
memory. Hardware as such isn't a constraint.
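
For reference, the direction we are considering is to keep each customer's
data separated at query time with a filter query on a customer identifier
field in the consolidated per-type core. A rough SolrJ sketch of what such a
search might look like (the core name "A", the URL, and the customer_id
field below are placeholders for illustration, not our actual schema):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class CustomerSearch {
        public static void main(String[] args) throws SolrServerException {
            // Point at the consolidated core for type "A" (placeholder URL).
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/A");

            // The user's search terms go in the main query...
            SolrQuery query = new SolrQuery("some search terms");

            // ...while a filter query restricts results to one customer.
            // Filter queries are cached separately and do not affect ranking.
            query.addFilterQuery("customer_id:42");
            query.setRows(10);

            QueryResponse response = solr.query(query);
            System.out.println("Hits: " + response.getResults().getNumFound());

            solr.shutdown();
        }
    }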

Regards
Manoj

On Tue, Oct 7, 2014 at 8:47 AM, Toke Eskildsen <t...@statsbiblioteket.dk>
wrote:

> On Tue, 2014-10-07 at 14:27 +0200, Manoj Bharadwaj wrote:
> > My team inherited a SOLR setup with an architecture that has a core for
> > every customer. We have a few different types of cores, say "A", "B", "C",
> > and for each of these there is a core per customer - namely "A1",
> > "A2"..., "B1", "B2"... Overall we have over 600 cores. We don't know the
> > history behind the current design - the exact reasons why it was done the
> > way it was done - but one probable consideration was to keep each
> > customer's data separate from the others'.
>
> It is not a bad reason. It ensures that ranked search is optimized
> towards each customer's data and makes it easy to manage adding and
> removing customers.
>
> > We want to go to a single-core-per-type architecture, and move to SOLR
> > Cloud as well in the near future to achieve sharding via the features
> > the cloud setup provides.
>
> If the setup is heavily queried on most of the cores, or if there are
> core-spanning searches, collapsing the user-specific cores into fewer
> super-cores might lower hardware requirements a bit. On the other hand,
> if most of the cores are idle most of the time, the 1 core/customer
> setup would give better utilization of the hardware.
>
> Why do you want to collapse the cores?
>
> - Toke Eskildsen, State and University Library, Denmark
>
>
>
