Yup - nothing about it will be automatic or easy - multi-DC is not really a
current feature. I'm just saying it's a fast way to move the data. If you set up
the same cluster on each side, though, the appropriate stuff will be in
ZooKeeper.
- Mark
On Feb 28, 2013, at 9:04 PM, varun srivastava wrote:
"You can replicate from a SolrCloud node still. Just hit it's replication
handler and pass in the master url to replicate to"
How will this work ? lets say s1dc1 is master of s1dc2 , s2dc1 is master
for s2dc2 .. so after hitting replicate index binary will get copied but
then how appropriate entri
On Feb 28, 2013, at 6:20 PM, varun srivastava wrote:
> So we need a way of indexing 1 dc
> and then somehow quickly propagating the index binary to the others.
You can replicate from a SolrCloud node still. Just hit its replication
handler and pass in the master url to replicate to. It doesn't have a …
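For concreteness, a minimal sketch of hitting the replication handler this way,
assuming placeholder hosts (dc1-shard1 in the source dc, dc2-shard1 in the target
dc), port 8983, and a core named collection1 - the handler's fetchindex command
with a masterUrl parameter does the one-off pull:

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    // Sketch: trigger a one-off pull replication on a dc2 core by pointing it
    // at the matching dc1 core. Host names, port, and core name are placeholders.
    public class PullIndex {
        public static void main(String[] args) throws IOException {
            String masterUrl = URLEncoder.encode(
                "http://dc1-shard1:8983/solr/collection1/replication", "UTF-8");
            URL url = new URL("http://dc2-shard1:8983/solr/collection1/replication"
                + "?command=fetchindex&masterUrl=" + masterUrl);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            // The handler returns immediately; poll ?command=details to watch progress.
            System.out.println("fetchindex response code: " + conn.getResponseCode());
            conn.disconnect();
        }
    }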
On 2/28/2013 4:20 PM, varun srivastava wrote:
We have 10 virtual data centres. It's set up like this because we do
rolling updates. While the 1st dc is getting indexed, the other 9 serve traffic.
Indexing one dc takes 2 hours. With a single shard we used to index one dc
and then quickly replicate the index …
Any thoughts on this?
We have 10 virtual data centres. It's set up like this because we do
rolling updates. While the 1st dc is getting indexed, the other 9 serve traffic.
Indexing one dc takes 2 hours. With a single shard we used to index one dc
and then quickly replicate the index into the other dcs by having …
How can I set up cloud master-slave? Can you point me to any sample config
or tutorial which describes the steps to get Solr cloud in a master-slave
setup?
As you know from my previous mails, I don't need active Solr replicas;
I just need a mechanism to copy a given SolrCloud index to a new inst…
On Feb 26, 2013, at 6:49 PM, varun srivastava wrote:
> So does it mean that while doing a "document add" the state of the cluster is fetched
> from zookeeper and then, depending on the hash of the docid, the target shard is
> decided?
We keep the zookeeper info cached locally. We only update it when ZooKeeper
tells us it has changed. For an update, at least one node must be up for each shard,
otherwise updates fail.
Solr replication works fine in 4.x; in fact it's used to synchronize
when bulk updates happen (say you bring up a new node).
The transaction logs are only used to store the last 100 documents (currently)
for synchronizing nodes that have missed only a few updates.
So does it mean that while doing a "document add" the state of the cluster is fetched
from zookeeper and then, depending on the hash of the docid, the target shard is
decided?
Assume we have 3 shards (with no replicas) and 1 went down while
indexing; will all the documents be routed to the remaining 2 shards?
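As an illustration of the hash-range routing being discussed (not Solr's actual
code): each shard owns a slice of the 32-bit hash space and a doc id is hashed
into one of them. The shard count, shard names, and the CRC32 hash below are
stand-ins for Solr's real MurmurHash3 and the ranges stored in clusterstate.json:

    import java.nio.charset.StandardCharsets;
    import java.util.zip.CRC32;

    // Toy hash-range routing: hash the doc id, look up the shard owning that
    // slice of the 32-bit hash space. Shard count and names are made up.
    public class ShardRouting {
        static final int NUM_SHARDS = 3;

        static int shardFor(String docId) {
            CRC32 crc = new CRC32();
            crc.update(docId.getBytes(StandardCharsets.UTF_8));
            long hash = crc.getValue();                  // 0 .. 2^32-1
            long rangeSize = (1L << 32) / NUM_SHARDS;    // even split of the hash space
            return (int) Math.min(hash / rangeSize, NUM_SHARDS - 1);
        }

        public static void main(String[] args) {
            for (String id : new String[] {"doc-1", "doc-2", "doc-42"}) {
                // If the owning shard has no live node, the add fails rather than
                // being rerouted to the surviving shards (per the answer above).
                System.out.println(id + " -> shard" + (shardFor(id) + 1));
            }
        }
    }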
ZooKeeper
/
/clusterstate.json - info about the layout and state of the cluster - collections, shards, urls, etc
/collections - config to use for the collection, shard leader voting zk nodes
/configs - sets of config files
/live_nodes - ephemeral nodes, one per Solr node
/overseer - work queue
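For reference, a small sketch of reading a couple of those znodes with the plain
ZooKeeper Java client, assuming an ensemble reachable at localhost:2181:

    import java.nio.charset.StandardCharsets;
    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    // Sketch: peek at the znodes listed above. The connect string is an
    // assumption; any member of the ZooKeeper ensemble works.
    public class InspectClusterState {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> {});

            // Full cluster layout: collections, shards, replica URLs and states.
            byte[] state = zk.getData("/clusterstate.json", false, null);
            System.out.println(new String(state, StandardCharsets.UTF_8));

            // One ephemeral child per live Solr node.
            List<String> liveNodes = zk.getChildren("/live_nodes", false);
            System.out.println("live nodes: " + liveNodes);

            zk.close();
        }
    }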
Hi Mark,
One more question:
While doing a Solr doc update/add, what information is required from zookeeper?
Can you tell me what information is stored in zookeeper other than the
startup configs?
Thanks
Varun
On Tue, Feb 26, 2013 at 3:09 PM, Mark Miller wrote:
>
> On Feb 26, 2013, at 5:25 PM, varun srivastava wrote:
On Feb 26, 2013, at 5:25 PM, varun srivastava wrote:
> Hi All,
> I have some questions regarding the role of zookeeper in the solrcloud runtime,
> while processing queries.
>
> 1) Is the zookeeper cluster consulted by solr shards for processing every
> request, or is it only used to copy config on startup?