Re: Autoscaling and inactive shards

2018-06-18 Thread Andrzej Białecki
> On 18 Jun 2018, at 14:02, Jan Høydahl wrote:
>
> Is there still a valid reason to keep the inactive shards around?
> If shard splitting is robust, could not the split operation delete the inactive shard once the new shards are successfully loaded, just like what happens during an automated merge of segments?

Re: Autoscaling and inactive shards

2018-06-18 Thread Jan Høydahl
Is there still a valid reason to keep the inactive shards around? If shard splitting is robust, could not the split operation delete the inactive shard once the new shards are successfully loaded, just like what happens during an automated merge of segments?

-- Jan Høydahl, search solution architect
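For reference, the manual version of the cleanup Jan describes already exists: the Collections API's DELETESHARD action, which only removes shards that are not in the active state (which is exactly how SPLITSHARD leaves the parent shard). A minimal sketch of building that request; the host, port, collection, and shard names are made up for illustration:

```python
from urllib.parse import urlencode

def delete_shard_url(base_url, collection, shard):
    """Build a Collections API request that removes an inactive shard.

    DELETESHARD refuses to delete an active shard, so this is safe to
    run against the parent shard left behind by SPLITSHARD.
    """
    params = urlencode({
        "action": "DELETESHARD",
        "collection": collection,
        "shard": shard,
    })
    return f"{base_url}/admin/collections?{params}"

print(delete_shard_url("http://localhost:8983/solr", "mycoll", "shard1"))
```

The thread's question is whether SPLITSHARD itself should issue this delete once the sub-shards are live, rather than leaving it to the operator.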

Re: Autoscaling and inactive shards

2018-06-18 Thread Andrzej Białecki
If I’m not mistaken, the weird accounting of “inactive” shard cores is also caused by the fact that the individual cores that constitute replicas in the inactive shard are still loaded, so they still affect the number of active cores. If that’s the case then we should probably fix this to prevent loading […]
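The accounting problem Andrzej describes can be illustrated with a toy cluster-state structure. The shape below is a deliberate simplification, not the real CLUSTERSTATUS response: after a split, counting every loaded core versus counting only cores in active shards gives different answers.

```python
# Toy cluster state after SPLITSHARD: the parent shard is marked
# inactive, but its replica cores remain loaded on the node.
# (Simplified shape, not the actual CLUSTERSTATUS JSON.)
cluster = {
    "shard1":   {"state": "inactive", "replicas": ["core_a"]},
    "shard1_0": {"state": "active",   "replicas": ["core_b"]},
    "shard1_1": {"state": "active",   "replicas": ["core_c"]},
}

# Every core that is loaded, regardless of shard state.
loaded_cores = sum(len(s["replicas"]) for s in cluster.values())

# Only cores whose shard is active.
active_cores = sum(len(s["replicas"])
                   for s in cluster.values() if s["state"] == "active")

print(loaded_cores, active_cores)  # the inactive parent inflates the first count
```

If autoscaling rules are evaluated against the first count rather than the second, inactive shards skew placement decisions, which is the fix being proposed.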

Re: Autoscaling and inactive shards

2018-06-13 Thread Shalin Shekhar Mangar
Yes, I believe Noble is working on this. See https://issues.apache.org/jira/browse/SOLR-11985

On Wed, Jun 13, 2018 at 1:35 PM Jan Høydahl wrote:
> Ok, get the meaning of preferences.
>
> Would there be a way to write a generic rule that would suggest moving shards to obtain balance, without specifying absolute core counts?

Re: Autoscaling and inactive shards

2018-06-13 Thread Jan Høydahl
Ok, get the meaning of preferences.

Would there be a way to write a generic rule that would suggest moving shards to obtain balance, without specifying absolute core counts? I.e. if you have three nodes:

A: 3 cores
B: 5 cores
C: 3 cores

Then that rule would suggest two moves to end up with 4 cores […]
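The relative rule Jan is after can be expressed as a simple greedy computation: repeatedly move one core from the fullest node to the emptiest until the counts differ by at most one. This is only an illustration of the desired behaviour, not how Solr's suggester actually works:

```python
def suggest_moves(cores):
    """Greedily suggest (source, destination) moves until no node has
    more than one core over any other node."""
    counts = dict(cores)
    moves = []
    while max(counts.values()) - min(counts.values()) > 1:
        src = max(counts, key=counts.get)  # fullest node
        dst = min(counts, key=counts.get)  # emptiest node
        counts[src] -= 1
        counts[dst] += 1
        moves.append((src, dst))
    return moves, counts

moves, final = suggest_moves({"A": 3, "B": 5, "C": 3})
print(moves, final)
```

Note that with 11 cores over 3 nodes a perfectly even split is impossible, so the sketch settles for a spread of at most one core between any two nodes.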

Re: Autoscaling and inactive shards

2018-06-11 Thread Shalin Shekhar Mangar
Hi Jan,

Comments inline:

On Tue, Jun 12, 2018 at 2:19 AM Jan Høydahl wrote:
> Hi
>
> I'm trying to have Autoscaling move a shard to another node after manually splitting.
> We have two nodes, one has a shard1 and the other node is empty.
>
> After SPLITSHARD you have
>
> * shard1 (inactive)
> […]
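For context, the rules being discussed are written against Solr's autoscaling policy API. A cluster policy like the following (syntax per the Solr 7.x autoscaling documentation; shown only as a sketch of the rule language) caps replicas per node with `#EACH`/`#ANY` placeholders, which is the absolute-constraint style Jan wants to avoid in favour of a relative "keep nodes balanced" rule:

```json
{
  "set-cluster-policy": [
    {"replica": "<2", "shard": "#EACH", "node": "#ANY"}
  ]
}
```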