Hi Ryan, thanks for that!
I have one outstanding question: when I take a snapshot on the master,
then snappull and snapinstall on the slave, the new index is not being
used; restarting the slave server does pick up the changes, however.
Has anyone else had this problem with recent development builds?
In case anyone is trying to do multicore replication, here are some of
the things I've done to get it working. These could go on the wiki
somewhere; what do people think?
Obviously, having as much shared configuration as possible is ideal. On
the master, I have core-specific:
- scripts.conf, for webapp_name, master_data_dir and master_status_dir
- solrconfig.xml, for the post-commit and post-optimise snapshooter
locations
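For reference, a core-specific post-commit hook in solrconfig.xml looks roughly like the following sketch; the exe and dir values are placeholders for each core's snapshooter location, not my actual paths:

```xml
<!-- Sketch: run this core's snapshooter after each commit.
     "dir" is where the script lives; adjust per core. -->
<listener event="postCommit" class="solr.RunExecutableListener">
  <str name="exe">snapshooter</str>
  <str name="dir">solr/core0/bin</str>
  <bool name="wait">true</bool>
</listener>
```

A matching listener on the postOptimize event covers the post-optimise case.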
On the slave, I have core-specific:
- scripts.conf, as above
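To make the core-specific bits concrete, a per-core scripts.conf might look something like this; the hostnames and paths here are made-up examples, not my actual settings:

```shell
# Hypothetical scripts.conf for one core ("core0"); values are examples only.
user=solr
solr_hostname=localhost
solr_port=8983
webapp_name=solr/core0
master_host=localhost
master_data_dir=/opt/solr/master/core0/data
master_status_dir=/opt/solr/master/core0/logs
```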
I've also customised snappuller to accept a different rsync module
name (it's hard-coded to 'solr' at present). This module name is set in
the slave's scripts.conf.
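In the slave's scripts.conf that means one extra line along these lines; the variable name is hypothetical, since this depends on my local snappuller patch:

```shell
# Hypothetical setting read by the patched snappuller; name may differ.
rsyncd_module=core0
```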
James
On 29 Apr 2008, at 13:44, Ryan McKinley wrote:
On Apr 29, 2008, at 3:09 PM, James Brady wrote:
Hi all,
I'm aiming to use the new multicore features in development
versions of Solr. My ideal setup would be to have master / slave
servers on the same machine, snapshotting across from the 'write'
to the 'read' server at intervals.
This was all fine with Solr 1.2, but the rsync & snappuller
configuration doesn't seem to be set up to allow for multicore
replication in 1.3.
The rsyncd.conf file allows for several data directories to be
defined, but the snappuller script only handles a single directory,
expecting the Lucene index to be directly inside that directory.
What's the best practice / best suggestions for replicating a
multicore update server out to search servers?
Currently, for multicore replication you will need to install the
snap* scripts for _each_ core. The scripts all expect a single core,
so for multiple cores you will just need to install them multiple
times.
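The per-core installation can be sketched as a small dry-run script; the core names and the src/scripts location are assumptions about a typical 1.3 checkout, so it only prints the copy commands rather than executing them:

```shell
#!/bin/sh
# Sketch: install one copy of the snap* scripts per core, so each copy
# sources that core's own conf/scripts.conf. Dry-run: prints commands only.
for core in core0 core1; do
  for script in snapshooter snappuller snapinstaller; do
    echo "cp src/scripts/$script solr/$core/bin/$script"
  done
done
```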
ryan