AFAICT, SolrCloud addresses the use case of distributed update for a
relatively small number of collections (dozens?) that each hold a relatively
large number of rows - hundreds of millions, if not billions - distributed
over a modest to moderate number of nodes (a handful to dozens, or
potentially low hundreds). Some people still call these collections "cores".
Technically, ZooKeeper was designed to coordinate thousands of nodes, but I
don't think that was for a use case like distributed query, which constantly
fans out to all shards.
Aleksey: What would you say is the average core size for your use case -
thousands or millions of rows? And how sharded would each of your
collections be, if at all?
-- Jack Krupansky
-----Original Message-----
From: Noble Paul നോബിള് नोब्ळ्
Sent: Friday, June 07, 2013 6:38 AM
To: solr-user@lucene.apache.org
Subject: Re: LotsOfCores feature
The wiki page was not written with SolrCloud in mind.
We have done such a deployment, where less than a tenth of the cores were
active at any given point in time. Though there were tens of millions of
indices, they were split among a large number of hosts.
If you don't insist on a Cloud deployment, it is possible. I'm not sure
whether it is possible with SolrCloud.
On Fri, Jun 7, 2013 at 12:38 AM, Aleksey <bitterc...@gmail.com> wrote:
I was looking at this wiki and linked issues:
http://wiki.apache.org/solr/LotsOfCores
They talk about a limit of 100K cores. Is that per server, or per entire
fleet, since ZooKeeper needs to manage them all?
I was considering a use case where I have tens of millions of indices,
but less than a million need to be active at any time, so they need
to be loaded on demand and evicted when not used for a while.
Also, since the number one requirement is efficient loading, I assume I
should store a prebuilt index somewhere so Solr can just download it and
strap it in, right?
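For reference, the on-demand loading and eviction described above maps onto the transient-core settings from the LotsOfCores work. A minimal sketch of a pre-SolrCloud (Solr 4.x-era) solr.xml follows; the core name, instanceDir, and cache size are illustrative assumptions, not values from this thread:

```xml
<solr persistent="true">
  <!-- transientCacheSize caps how many transient cores stay loaded at once;
       least-recently-used cores beyond the cap are unloaded automatically -->
  <cores adminPath="/admin/cores" transientCacheSize="1000">
    <!-- loadOnStartup="false" + transient="true" means the core is loaded
         on first request and evicted when pushed out of the transient cache -->
    <core name="tenant-00001" instanceDir="tenant-00001"
          loadOnStartup="false" transient="true"/>
  </cores>
</solr>
```

In this scheme each on-demand index is a core marked transient; Solr handles the load-on-first-request and LRU eviction, while fetching a prebuilt index into the instanceDir would still be up to external tooling.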
The root issue is marked as "Won't Fix", but some other important
subissues are marked as resolved. What's the overall status of the
effort?
Thank you in advance,
Aleksey
--
-----------------------------------------------------
Noble Paul