>> > that if you couldn't load the snapshot into Z, odds are Z isn't
>> > responding to queries either.
>> >
>> > a better course of action might just be to have an automated system
>> > which monitors the distribution status info on the master
Thanks, guys.
Glad to know the scripts work very well in your experience. (Well, they are
indeed quite simple.) So that's how I imagine we should do it, except that
you guys added a very good point -- the monitoring system can invoke a
script to take the slave out of the load balancer. I'd li
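The "monitoring script pulls a lagging slave out of rotation" idea could be sketched roughly as below. This is not one of the stock Solr distribution scripts; the snapshot naming (snapshot.YYYYMMDDHHMMSS), the decide_action helper, and the out_of_rotation.sh hook are all assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical monitoring hook: compare the snapshot the slave last
# installed against the master's newest snapshot and print the action a
# load-balancer script should take. Timestamped snapshot names sort
# lexicographically, so a plain string comparison is enough here.

decide_action() {
  master_snap=$1    # newest snapshot on the master, e.g. snapshot.20240101120000
  slave_snap=$2     # snapshot currently installed on the slave
  if [ "$slave_snap" = "$master_snap" ]; then
    echo keep       # slave is current: leave it in rotation
  else
    echo remove     # slave is lagging or failed: pull it out
  fi
}

# A cron job could then do something like (out_of_rotation.sh being a
# hypothetical site-specific script):
#   [ "$(decide_action "$m" "$s")" = remove ] && ./out_of_rotation.sh "$host"
```

In practice you would also want a grace period, since a slave is briefly one snapshot behind during every normal distribution cycle.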
If snapinstaller fails to install the latest snapshot, then chances are
that it wouldn't be able to install any earlier snapshots either. All it
does is some very simple filesystem operations, and then it invokes the Solr
server to do a commit. I agree with Chris that the best thing to do is to take it
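For the record, those "very simple filesystem operations plus a commit" amount to roughly the following. This is a minimal sketch, assuming snapshots live under a data directory as snapshot.YYYYMMDDHHMMSS directories; the paths and the Solr URL are illustrative, not taken from any particular setup.

```shell
#!/bin/sh
# Rough sketch of what a snapshot install boils down to (not the actual
# snapinstaller script): pick the newest snapshot, swap it into place as
# the live index, then ask Solr to commit so it opens a new searcher.

# Pick the newest snapshot name from a newline-separated list on stdin
# (timestamped names sort lexicographically, so plain sort works).
latest_snapshot() {
  sort | tail -n 1
}

install_snapshot() {
  data_dir=$1
  snap=$(ls "$data_dir" | grep '^snapshot\.' | latest_snapshot)
  [ -n "$snap" ] || { echo "no snapshot found" >&2; return 1; }
  rm -rf "$data_dir/index"
  cp -rl "$data_dir/$snap" "$data_dir/index"  # hard-link copy, cheap
  # Tell Solr to open a new searcher on the freshly installed index
  # (classic XML update message; adjust host/port for your install):
  curl -s http://localhost:8983/solr/update -H 'Content-Type: text/xml' \
       --data-binary '<commit/>'
}
```

Since every step is an ordinary filesystem operation, there is little that can fail here once the initial setup is right, which matches Chris's experience below.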
: So it looks like all we can do is monitor the logs and alert people to
: fix the issue and rerun the scripts, etc. whenever failures occur. Is that
: the correct understanding?
I have *never* seen snappuller or snapinstaller fail (except during an
initial rollout of Solr when I forgot to set
Hi, there,
We want to use Solr's Collection Distribution. Here's a question regarding
recovering from failures of the scripts. To my understanding:
* if snappuller fails on a slave, we could implement something like having
the master examine the status messages from all slaves and notify