Thanks. That's what I suspected. Yes, MegaMiniCores.
My scenario is purely hypothetical, but it is also relevant for
"multi-tenant" use cases, where the users and schemas are not known in
advance and users are only online intermittently.
Users could fall into three rough size categories: very small, medium, and very
large. Over time a user might move from very small to medium to very large.
Very large users could require their own dedicated clusters. Medium-size users
might occasionally require a dedicated node, but not always. And very small
users are mostly offline, though occasionally a fair number of them are online
for short periods of time.
-- Jack Krupansky
-----Original Message-----
From: Aleksey
Sent: Friday, June 07, 2013 3:44 PM
To: solr-user
Subject: Re: LotsOfCores feature
Aleksey: What would you say is the average core size for your use case -
thousands or millions of rows? And how sharded would each of your
collections be, if at all?
Average core/collection size wouldn't even be in the thousands; more like
hundreds. The largest would be half a million or so, but that's a
pathological case. I don't need sharding or queries that fan out to
different machines. In fact, I'd like to avoid that so I don't have to
collate the results.
The Wiki page was not written for Cloud Solr.
We have done such a deployment, where less than a tenth of the cores were
active at any given point in time. Though there were tens of millions of
indices, they were split among a large number of hosts.
If you don't insist on a Cloud deployment it is possible. I'm not sure if it
is possible with Cloud.
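For context, the mostly-inactive-cores setup described on the LotsOfCores
wiki page rests on marking cores as transient and lazily loaded, so only a
bounded number of them are open in memory at once. A minimal sketch, assuming
a Solr 4.x-style solr.xml; the core names and cache size are placeholders,
not details from this thread:

    <solr persistent="true">
      <cores adminPath="/admin/cores" transientCacheSize="100">
        <!-- transient="true" lets Solr evict the core once the cache is full;
             loadOnStartup="false" defers opening it until the first request. -->
        <core name="tenant_0001" instanceDir="tenant_0001"
              transient="true" loadOnStartup="false"/>
        <core name="tenant_0002" instanceDir="tenant_0002"
              transient="true" loadOnStartup="false"/>
      </cores>
    </solr>

With such a configuration a host can carry far more cores on disk than it
keeps loaded, which is consistent with the "less than a tenth active" figure
above.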
By Cloud do you mean SolrCloud specifically? I don't have to have it if I
can do without it. The bottom line is that I want a bunch of small cores
distributed over a fleet, with each core fitting completely on one server.
Would you be willing to provide a little more detail on your setup?
In particular, how are you managing the cores?
How do you route requests to the proper server?
If you scale the fleet up and down, does reshuffling of the cores
happen automatically, or is it an involved manual process?
Thanks,
Aleksey
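For reference on the core-management question above: on a single Solr node,
cores can be created and unloaded on the fly through the stock CoreAdmin API.
A minimal sketch; the host, port, and core name are placeholders, not details
from this thread:

    # Create a core for a newly arrived tenant on the host chosen for it
    curl "http://solr-host-01:8983/solr/admin/cores?action=CREATE&name=tenant_0001&instanceDir=tenant_0001"

    # Inspect the core's status
    curl "http://solr-host-01:8983/solr/admin/cores?action=STATUS&core=tenant_0001"

    # Unload the core when the tenant is moved or retired
    # (deleteIndex=false keeps the index files on disk)
    curl "http://solr-host-01:8983/solr/admin/cores?action=UNLOAD&core=tenant_0001&deleteIndex=false"

How requests are routed to the right host, and how cores get reshuffled when
the fleet is resized, is exactly what the questions above are asking; these
CoreAdmin calls are only the per-node building block.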