8.11.3 release

2023-08-01 Thread Ishan Chattopadhyaya
Hi all,
There have been lots of bug fixes in 9x that should benefit all 8x
users as well. I thought of volunteering for an 8.x release based on
this comment [0].

Unless anyone has objections or concerns, can we plan 1st September
2023 (one month from now) as a tentative release date for 8.11.3? I
think that gives us ample time to backport the relevant fixes to 8x.

Best regards,
Ishan

[0] -
https://issues.apache.org/jira/browse/SOLR-16777?focusedCommentId=17742854&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17742854


Re: 8.11.3 release

2023-08-01 Thread Uwe Schindler
Maybe ask on the Lucene list, too, whether there are any bugs people
would like to have fixed in Lucene.


Uwe

On 01.08.2023 at 11:10, Ishan Chattopadhyaya wrote:

Hi all,
There have been lots of bug fixes in 9x that should benefit all 8x
users as well. I thought of volunteering for an 8.x release based on
this comment [0].

Unless anyone has objections or concerns, can we plan 1st September
2023 (one month from now) as a tentative release date for 8.11.3? I
think that gives us ample time to backport the relevant fixes to 8x.

Best regards,
Ishan

[0] -
https://issues.apache.org/jira/browse/SOLR-16777?focusedCommentId=17742854&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17742854


--
Uwe Schindler
Achterdiek 19, D-28357 Bremen
https://www.thetaphi.de
eMail: u...@thetaphi.de





Re: 8.11.3 release

2023-08-01 Thread Ishan Chattopadhyaya
Oh yes, good idea. Forgot about the split!

+Lucene Dev 

On Tue, 1 Aug 2023, 6:17 pm Uwe Schindler wrote:

> Maybe ask on the Lucene list, too, whether there are any bugs people
> would like to have fixed in Lucene.
>
> Uwe
>
> On 01.08.2023 at 11:10, Ishan Chattopadhyaya wrote:
> > Hi all,
> > There have been lots of bug fixes in 9x that should benefit all 8x
> > users as well. I thought of volunteering for an 8.x release based on
> > this comment [0].
> >
> > Unless anyone has objections or concerns, can we plan 1st September
> > 2023 (one month from now) as a tentative release date for 8.11.3? I
> > think that gives us ample time to backport the relevant fixes to 8x.
> >
> > Best regards,
> > Ishan
> >
> > [0] -
> > https://issues.apache.org/jira/browse/SOLR-16777?focusedCommentId=17742854&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17742854
> >
> --
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> https://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>


Re: Deprecate Solr's "Ping" API?

2023-08-01 Thread Jason Gerlowski
> At one time I had an install that used the ping handler to tell haproxy
> which replicas (not using cloud OR /replication) were down either
> because of problems or because I wanted to explicitly take them out of
> rotation.  It worked really well for that.

Curious - did you move away from this for a different approach?  Or
did the deployment itself change and make the /ping+haproxy
combination unnecessary?

I'm likely missing something, but that use case seems like something
that's already covered reasonably well by the "LoadBalancing" line of
SolrClients.  LB clients keep a "dynamic" record of server health
(based on previous requests/responses), and also allow users to
manually add/remove servers from rotation.  So at a glance at least it
covers the key pieces of the /ping+haproxy use case?
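
For illustration, a rough sketch of that manual control, assuming
SolrJ's LBHttpSolrClient (the hostnames and core name here are made up,
and LBHttp2SolrClient is the newer equivalent):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.LBHttpSolrClient;

    public class LbSketch {
      public static void main(String[] args) throws Exception {
        // Two copies of the same index; a server that fails a request is
        // put on a "zombie" list and retried in the background until it
        // recovers, at which point it rejoins the rotation.
        LBHttpSolrClient lb = new LBHttpSolrClient.Builder()
            .withBaseSolrUrls("http://copyA:8983/solr/core1",
                              "http://copyB:8983/solr/core1")
            .build();
        lb.query(new SolrQuery("*:*"));  // served by whichever copy is healthy

        // Manual rotation control, without touching a load balancer:
        lb.removeSolrServer("http://copyA:8983/solr/core1");
        lb.addSolrServer("http://copyA:8983/solr/core1");
        lb.close();
      }
    }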

Jason

On Tue, Jun 27, 2023 at 10:52 PM Shawn Heisey wrote:
>
> On 6/27/23 15:07, Mikhail Khludnev wrote:
> > It handles collection (cloud) as well. But if it's a sharded and
> > replicated collection, where will the healthcheck file be created?
>
> In cloud mode the existing ping handler is not very reliable.  You never
> know which shard replica will actually get the ping
> request, and it's only core aware, not collection/shard/replica/node
> aware.  As a result, it has the same limitations as the DIH handler does
> in cloud mode.
>
> At one time I had an install that used the ping handler to tell haproxy
> which replicas (not using cloud OR /replication) were down either
> because of problems or because I wanted to explicitly take them out of
> rotation.  It worked really well for that.
>
> I think it's useful as-is for standalone mode, and I would hate to see
> it removed unless there is a plan to replace its functionality with
> something better.  Extending it for cloud mode should involve the
> ability to disable health check at the collection level and the node
> level, with the health check file probably living in ZK, not on the
> disk.  Disabling the health check would keep SolrCloud's built-in load
> balancing from using the disabled resource, and would also make the ping
> handler (or its replacement) return a non-200 response code.
>
> Thanks,
> Shawn
>



Re: Deprecate Solr's "Ping" API?

2023-08-01 Thread Shawn Heisey

On 8/1/23 13:35, Jason Gerlowski wrote:

>> At one time I had an install that used the ping handler to tell haproxy
>> which replicas (not using cloud OR /replication) were down either
>> because of problems or because I wanted to explicitly take them out of
>> rotation.  It worked really well for that.


> Curious - did you move away from this for a different approach?  Or
> did the deployment itself change and make the /ping+haproxy
> combination unnecessary?


I was laid off from that job in 2018.  I don't know what they did with 
it after I left.



> I'm likely missing something, but that use case seems like something
> that's already covered reasonably well by the "LoadBalancing" line of
> SolrClients.  LB clients keep a "dynamic" record of server health
> (based on previous requests/responses), and also allow users to
> manually add/remove servers from rotation.  So at a glance at least it
> covers the key pieces of the /ping+haproxy use case?


Do the LB clients offer the option of saying "take index copy A out of
rotation" without touching the application?  I can't imagine that would
be possible.  That was trivially easy with a status page that I
developed for the Solr install as a whole -- it had a link where I could
do "disable" or "enable" on the ping handler for each copy of the index.
The clients in the applications had no need to be aware of multiple
copies of the index, because the load balancer handled all of that.
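
For context, that enable/disable toggle comes from giving the ping
handler a healthcheckFile in solrconfig.xml, roughly like this (the
file name is whatever you choose):

    <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
      <lst name="invariants">
        <str name="q">solrpingquery</str>
      </lst>
      <str name="healthcheckFile">server-enabled.txt</str>
    </requestHandler>

With that in place, /admin/ping?action=enable creates the file,
?action=disable removes it, and while the file is absent the handler
answers with a 503 instead of a 200.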


I had haproxy set up to use only copy A if it was available (and enabled 
in the ping handler), and move to copy B if not, and last ditch was copy 
C, my dev install.  I could have also had it use both copy A and B, but 
active/backup seemed like a better option.
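
The haproxy side of that was conceptually something like the sketch
below (hostnames and core name invented here).  By default haproxy only
sends traffic to a backup server when every non-backup server is down,
and only to the first operational backup, which gives exactly that
A-then-B-then-C behavior:

    backend solr
        option httpchk GET /solr/core1/admin/ping
        http-check expect status 200
        server copyA copya.example.com:8983 check
        server copyB copyb.example.com:8983 check backup
        server copyC copyc.example.com:8983 check backup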


Upgrades were pretty easy with that setup.  I could upgrade copy B 
(including a full reindex), then disable the ping handler for copy A, 
and upgrade/reindex copy A.  If I needed to test a config change, I 
could do the change on copy B or C, without affecting the live index on 
copy A.  All without SolrCloud.  I did have some ideas for switching to 
cloud, but it would have required a substantial rewrite of the indexing 
system.  If I was still working there, I very likely would have done so 
by now.


Thanks,
Shawn
