Hi,
I am using Lucidworks Fusion as the interface for Solr, and I would like to
automate some of the processes we have at the company, such as adding business
rules (boosting, redirecting, facets, etc.) using Python.
I have seen a few docs on Solr APIs for Python, such as pySolr; however, it is
not clear whether they cover this kind of automation.
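
(One hedged sketch of how this might look: since Fusion exposes its features over a REST API, plain HTTP calls from Python with the requests library are often enough. The host, port, app name, credentials, endpoint path, and payload fields below are placeholders/assumptions that vary by Fusion version, so check the Rules / Query Rewrite API docs for your release before relying on it.)

import requests

# Everything below is an assumption for illustration: adjust the host, port,
# app name, credentials, endpoint path, and payload fields to match your
# Fusion deployment and its documented Rules / Query Rewrite API.
FUSION_BASE = "https://fusion.example.com:6764/api"
APP = "my-app"
AUTH = ("admin", "password123")   # or token-based auth, depending on your setup

def create_boost_rule(field, value, boost):
    # Create a simple boost rule by POSTing to a (hypothetical) rules endpoint.
    payload = {
        "type": "boost",
        "name": f"boost-{field}-{value}",
        "field": field,
        "value": value,
        "boost": boost,
    }
    resp = requests.post(
        f"{FUSION_BASE}/apps/{APP}/query-rewrite/instances",  # path is an assumption
        json=payload,
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(create_boost_rule("brand", "acme", 2.0))
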
There are a variety of ways you could do it.
The easiest short-term change is to simply modify what handles most zk
retries - the ZkCmdExecutor - which is already plugged into SolrZkClient where
it retries. It tries to guess when a session has timed out and does fallback
retries up to that point.
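
(For anyone less familiar with that piece, the pattern it implements is roughly "retry on connection loss, but give up once about a session timeout's worth of time has passed, because by then the session and its ephemeral nodes are presumed gone." A rough sketch of that idea in Python, purely illustrative and not the actual Java ZkCmdExecutor:)

import random
import time

def retry_until_session_timeout(op, session_timeout_s=30.0, base_delay_s=1.5):
    # Keep retrying `op` on connection loss, but stop once roughly a ZooKeeper
    # session timeout has elapsed: further retries past that point are pointless.
    deadline = time.monotonic() + session_timeout_s
    attempt = 0
    while True:
        try:
            return op()
        except ConnectionError:              # stand-in for a ZK "connection loss" error
            if time.monotonic() >= deadline:
                raise                        # session likely expired; surface the failure
            attempt += 1
            # Back off a little more on each attempt, capped well below the deadline.
            time.sleep(min(base_delay_s * attempt, 5.0) + random.uniform(0, 0.1))
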
Because the
>
> I would hope there are few developers doing cloud work that don’t
> understand the lazy local cluster state - it’s entirely fundamental to
> everything.
The busy waiting I would be less surprised if someone didn’t understand, but
as far as I’m concerned they are bugs too. It’s an event-driven system.
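
(To illustrate the distinction for anyone following along: busy waiting means re-checking the possibly stale local state in a polling loop, while the event-driven style blocks until a watch/notification callback signals a change. A small illustrative sketch in Python, not Solr code; the names and state shape are invented:)

import threading
import time

state = {"collection_ready": False}
changed = threading.Event()

def on_watch_fired(new_state):
    # Would be called by a (hypothetical) ZK watch callback when state changes.
    state.update(new_state)
    changed.set()

def wait_busy(timeout=30.0):
    # Busy waiting: re-check a possibly stale cache in a tight polling loop.
    deadline = time.monotonic() + timeout
    while not state["collection_ready"]:
        if time.monotonic() > deadline:
            raise TimeoutError("collection never became ready")
        time.sleep(0.1)

def wait_event_driven(timeout=30.0):
    # Event driven: block until the watch callback signals a change, then re-check.
    deadline = time.monotonic() + timeout
    while not state["collection_ready"]:
        remaining = deadline - time.monotonic()
        if remaining <= 0 or not changed.wait(remaining):
            raise TimeoutError("collection never became ready")
        changed.clear()
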
David’s issue and my response are referring to the number of zk servers in
the zk cluster. His issue requires more than one zk server. The tests have
always used 1.
Yes, the whole system is supposed to work fine with a stale local cache of
what’s in zk. That is the design. When that doesn’t work, that’s a bug.
It's been >72h since the vote was initiated and the result is:
+1 7 (6 binding)
0 0
-1 0
This vote has PASSED
---------- Forwarded message ---------
From: Houston Putman
Date: Sat, Sep 25, 2021 at 9:50 AM
Subject: Re: [VOTE] Release Lucene/Solr 8.10.0 RC1
To: Solr/Lucene Dev
SUCCESS! [1
I don't know about the fix for this specific test, but the way cluster state
is maintained on a node does not depend on how many ZK nodes there are.
When a node does an action against ZK, it does its write to ZK.
When it needs to read, it reads from its local cache.
The local cache of the node is updated asynchronously, when ZooKeeper notifies
the node (via watches) that something has changed.
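
(To make that concrete, here is a small conceptual sketch of the pattern in Python. FakeZk and the method names are invented stand-ins for a real ZooKeeper client, and this is not Solr's actual ZkStateReader; it just shows the shape of "write to ZK, read from a local cache that watch callbacks keep up to date." Reads can therefore be briefly stale, which is the intended trade-off discussed above.)

class FakeZk:
    # Tiny in-memory stand-in for a ZK client. Note: a real ZK watch is
    # one-shot and must be re-armed after it fires; this fake keeps watchers
    # registered permanently to keep the sketch short.
    def __init__(self):
        self._data = {}
        self._watchers = {}

    def write(self, path, value):
        self._data[path] = value
        for cb in self._watchers.get(path, []):
            cb(value)                      # fire "watch" notifications on change

    def read(self, path):
        return self._data.get(path)

    def watch(self, path, callback):
        self._watchers.setdefault(path, []).append(callback)


class ClusterStateCache:
    # Writes go straight to ZK; reads are served from a local cache that a
    # watch callback refreshes asynchronously, so reads may be slightly stale.
    def __init__(self, zk, path="/clusterstate"):
        self.zk = zk
        self.path = path
        self._cached = zk.read(path)       # initial fetch
        zk.watch(path, self._on_change)    # keep the cache up to date

    def _on_change(self, new_value):
        self._cached = new_value           # invoked when ZK reports a change

    def read(self):
        return self._cached                # never hits ZK directly

    def write(self, new_value):
        self.zk.write(self.path, new_value)  # always hits ZK


if __name__ == "__main__":
    zk = FakeZk()
    cache = ClusterStateCache(zk)
    cache.write({"collections": ["test"]})
    print(cache.read())                    # served from the local cache
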