bq. But I would assume it should still be ok. The number of watchers
should still not be gigantic.
This assumption would need to be rigorously tested before I'd be
comfortable. I've spent quite a
bit of time with unhappy clients chasing down issues in the field where
it takes hours to cold-sta
I have now opened https://issues.apache.org/jira/browse/SOLR-13239 for the
problem I observed.
Well, who can really be sure about those things? But I would assume it
should still be ok. The number of watchers should still not be gigantic.
I have setups with about 2000 collections each but far less
Jason's comments are exactly why there _is_ a state.json per
collection rather than the single clusterstate.json in the original
implementation.
Hendrik:
yes, please do open a JIRA for the condition you observed,
especially if you can point to the suspect code. There have
been intermittent issues
Hi Jason,
thanks for your answer. Yes, you would need one watch per state.json and
thus one watch per collection. That should, however, not really be a
problem for ZK. I would assume that the Solr server instances need to
monitor those nodes to be up to date on the cluster state. Using
org.apa
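The watch-per-collection idea above can be sketched without a real ZooKeeper connection. This plain-Java stand-in (all class and method names here are illustrative, not actual Solr or ZooKeeper APIs) shows the key behavior: when the "watch" fires, the cached collection state is dropped immediately, so the next lookup re-reads fresh data instead of serving an entry that may be up to a full timeout period stale.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for per-collection state watching. In real Solr/ZK
// the trigger would be a ZooKeeper watch on /collections/<name>/state.json;
// here a plain method call plays that role so the sketch is self-contained.
public class CollectionStateRegistry {
    private final Map<String, String> cachedState = new ConcurrentHashMap<>();

    // Simulates reading state.json from ZK and caching it (and, in a real
    // client, re-arming the one-shot watch at the same time).
    public String fetchAndCache(String collection, String stateFromZk) {
        cachedState.put(collection, stateFromZk);
        return stateFromZk;
    }

    // Simulates the watch firing for one collection: invalidate immediately.
    public void onWatchFired(String collection) {
        cachedState.remove(collection);
    }

    // Returns the cached state, or null meaning "must re-fetch from ZK".
    public String lookup(String collection) {
        return cachedState.get(collection);
    }
}
```

The point of the sketch is the contrast with a pure TTL cache: invalidation happens at the moment the state actually changes, not on a fixed schedule.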
Hi Hendrik,
I'll try to answer, and let others correct me if I stray. I wasn't
around when CloudSolrClient was written, so take this with a grain of
salt:
"Why does the client need that timeout? Wouldn't it make sense to
use a watch?"
You could probably write a CloudSolrClient that uses watch
Hi,
when I perform a query using the CloudSolrClient, the code first
retrieves the DocCollection to determine to which instance the query
should be sent [1]. getDocCollection [2] does a lookup in a cache, which
has a 60s expiration time [3]. When a DocCollection has to be reloaded
this is guar
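The 60s-expiration lookup described above behaves roughly like the minimal time-bounded cache below. This is a sketch, not the actual Solr implementation; the class and method names are invented, and the clock is passed in explicitly so the expiration behavior is deterministic.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Minimal sketch of a TTL cache like the one getDocCollection consults:
// entries older than ttlMillis are re-fetched via the supplied loader.
public class TtlCache<V> {
    private static final class Entry<V> {
        final V value;
        final long loadedAt;
        Entry(V value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    private final Map<String, Entry<V>> entries = new HashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Returns the cached value if still fresh at nowMillis, otherwise
    // reloads it through the loader and stores the new entry.
    public synchronized V get(String key, long nowMillis, Supplier<V> loader) {
        Entry<V> e = entries.get(key);
        if (e == null || nowMillis - e.loadedAt >= ttlMillis) {
            V v = loader.get();
            entries.put(key, new Entry<>(v, nowMillis));
            return v;
        }
        return e.value;
    }
}
```

With a 60 000 ms TTL, a lookup at 59 999 ms still returns the cached value; the one at 60 000 ms triggers a reload, which is exactly the window in which a client can act on stale cluster state.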