SolrCloud does not come with any autoscaling functionality. If you want such a
thing, you’ll need to write it yourself.
https://github.com/whitepages/solrcloud_manager might be a useful head start
though, particularly the “fill” and “cleancollection” commands. I don’t do
*auto* scaling, but I d
Hi Paul, Thanks for the detail, but I am still not able to understand how
the Core API would make it easier for you to create replicas. I understand
that using the Core API you can add more cores, but would that also populate
the data so that it can serve queries / act like a replica?
Second, as Shaw
These are excellent questions and give me a good sense of why you suggest using
the collections api.
In our case we have 8 shards of product data with an even distribution of data
per shard, no hot spots. We have very different load at different points in the
year (cyber monday), and we tend to
Hi Paul,
For auto-scaling, it depends on how you plan to design it and what/how
you want to scale. In which scenario do you think the CoreAdmin API is easy to
use for a sharded SolrCloud environment?
Isn't it that in a sharded environment (assume 3 shards A, B & C), if shard B
is having higher or mo
Hi all,
This doesn’t really answer the following question:
What is the suggested way to add a new node to a collection via the
apis? I am specifically thinking of autoscale scenarios where a node has
gone down or more nodes are needed to handle load.
The coreadmin api makes this easy. The c
Hi Paul,
Shawn is referring to using the Collections API
https://cwiki.apache.org/confluence/display/solr/Collections+API rather than
the Core Admin API https://cwiki.apache.org/confluence/display/solr/CoreAdmin+API
for SolrCloud.
Hope that clarifies. You mentioned ADDREPLICA, which is the
Collections AP
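For reference, here is a minimal sketch of what a Collections API ADDREPLICA request looks like. The collection name, shard name, and node name below are hypothetical placeholders, and the host is assumed to be a local Solr instance:

```python
from urllib.parse import urlencode

# Hypothetical Solr base URL for illustration.
SOLR = "http://localhost:8983/solr"

def addreplica_url(collection, shard, node=None):
    """Build a Collections API ADDREPLICA request URL."""
    params = {"action": "ADDREPLICA", "collection": collection, "shard": shard}
    if node:
        # Pin the new replica to a specific node instead of letting Solr choose.
        params["node"] = node
    return f"{SOLR}/admin/collections?{urlencode(params)}"

print(addreplica_url("products", "shard1", "newhost:8983_solr"))
```

Issuing that URL (e.g. with curl) asks Solr to create a new replica of the given shard, which it then populates by replicating from the shard leader.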
Then what is the suggested way to add a new node to a collection via the apis?
I am specifically thinking of autoscale scenarios where a node has gone down
or more nodes are needed to handle load.
Note that the ADDREPLICA endpoint requires a shard name, which puts the onus of
how to scale ou
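One way to carry that onus yourself is to pick the target shard before calling ADDREPLICA. A sketch, assuming you have already parsed per-shard replica counts out of a CLUSTERSTATUS response (the shard names and counts here are made up):

```python
# Given a mapping of shard name -> current replica count (e.g. derived from
# the Collections API CLUSTERSTATUS response), choose the shard to grow next:
# the one with the fewest replicas.
def least_replicated_shard(replica_counts):
    return min(replica_counts, key=replica_counts.get)

print(least_replicated_shard({"shard1": 3, "shard2": 2, "shard3": 3}))  # -> shard2
```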
On 2/13/2016 6:01 PM, McCallick, Paul wrote:
> - When creating a new collection, SolrCloud will use all available nodes for
> the collection, adding cores to each. This assumes that you do not specify a
> replicationFactor.
The number of nodes that will be used is numShards multiplied by
replic
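The arithmetic being described can be sketched as follows. Using the 8 shards mentioned earlier in the thread as an example, and assuming the CREATE call's maxShardsPerNode parameter controls how many of those cores may share a node:

```python
import math

# Total cores a collection needs is numShards * replicationFactor.
def cores_needed(num_shards, replication_factor):
    return num_shards * replication_factor

# With maxShardsPerNode=1 (the default), that is also the minimum node count.
def min_nodes(num_shards, replication_factor, max_shards_per_node=1):
    return math.ceil(cores_needed(num_shards, replication_factor) / max_shards_per_node)

print(cores_needed(8, 2))                        # -> 16 cores
print(min_nodes(8, 2))                           # -> 16 nodes at 1 core each
print(min_nodes(8, 2, max_shards_per_node=2))    # -> 8 nodes at 2 cores each
```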
I’d like to verify the following:
- When creating a new collection, SolrCloud will use all available nodes for
the collection, adding cores to each. This assumes that you do not specify a
replicationFactor.
- When adding new nodes to the cluster AFTER the collection is created, one
must use
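As a sketch of the first point, this is the Collections API CREATE call whose numShards and replicationFactor drive that initial placement across nodes (the collection name and host below are hypothetical):

```python
from urllib.parse import urlencode

SOLR = "http://localhost:8983/solr"  # hypothetical Solr base URL

# If replicationFactor is omitted, Solr defaults it to 1.
params = {
    "action": "CREATE",
    "name": "products",   # hypothetical collection name
    "numShards": 8,
    "replicationFactor": 2,
}
url = f"{SOLR}/admin/collections?{urlencode(params)}"
print(url)
```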
Hi,
I found the email addresses from a slide-share @
http://www.slideshare.net/thelabdude/tjp-solr-webinar. It's very useful. We
are developing SOLR search using CDH4 Cloudera and embedded SOLR
4.4.0-search-1.1.0.
We created a Collection when the cluster had 2 slave nodes. Then two extra
nodes ad
i need to manually trigger this
> somehow ?
>
> Is there a better idea for having this sleeping replicas? I bet lots of
> people faced this problem, so a best practice must be out there.
>
>
>
> -
> Thanks,
> Michael
this sleeping replicas? I bet lots of
people faced this problem, so a best practice must be out there.
-
Thanks,
Michael
--
View this message in context:
http://lucene.472066.n3.nabble.com/Replication-after-re-adding-nodes-to-cluster-sleeping-replicas-tp4098764.html
Sent from the Solr - User ma