You can either use a dedicated rsync port for each instance or hack the existing scripts to support multiple rsync modules. Both ways should work.
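For the multiple-modules route, here is a minimal sketch of a hand-written rsyncd.conf (the instance names, paths, and port below are just placeholders for illustration):

    # /etc/rsyncd-solr.conf -- one read-only module per Solr instance
    uid = solr
    gid = solr
    use chroot = no

    [solr-core1]
        path = /var/solr/core1/data
        comment = Solr index for core1
        read only = true

    [solr-core2]
        path = /var/solr/core2/data
        comment = Solr index for core2
        read only = true

    # ...and so on for the other instances

Then start a single daemon against that file instead of using rsyncd-start:

    rsync --daemon --config=/etc/rsyncd-solr.conf --port=18983

A slave would pull the latest snapshot from the module matching its instance, along the lines of:

    rsync -a rsync://master:18983/solr-core1/snapshot.20080701120000/ \
        /var/solr/core1/data/snapshot.20080701120000/

The catch is that the slave-side script has the module name "solr" baked in, so you would also need to tweak it so each slave asks for its own module. With the dedicated-port route you leave the scripts alone and just run one rsyncd per instance, each started with its own data dir and port.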
Bill

On Tue, Jul 1, 2008 at 3:49 AM, Jacob Singh <[EMAIL PROTECTED]> wrote:
> Hi Bill and Others:
>
> Bill Au wrote:
> > The rsyncd-start script gets the data_dir path from the command line
> > and creates a rsyncd.conf on the fly exporting the path as the rsync
> > module named "solr". The slaves need the data_dir path on the master
> > to look for the latest snapshot. But the rsync command used by the
> > slaves relies on the rsync module name "solr" to do the file transfer
> > using rsyncd.
>
> So is the answer that replication simply won't work for multiple
> instances unless I have a dedicated port for each one?
>
> Or is the answer that I have to hack the existing scripts?
>
> I'm a little confused when you say that the slave needs to know the
> master's data dir but, no matter what it sends, it needs to match the
> one known by the master when it starts rsyncd...
>
> Sorry if my questions are newbie; I've not actually used rsyncd, but
> I've read up quite a bit now.
>
> Thanks,
> Jacob
>
> > Bill
> >
> > On Tue, Jun 10, 2008 at 4:24 AM, Jacob Singh <[EMAIL PROTECTED]>
> > wrote:
> >
> >> Hey folks,
> >>
> >> I'm messing around with running multiple indexes on the same server
> >> using Jetty contexts. I've got them running groovy thanks to the
> >> tutorial on the wiki, however I'm a little confused about how the
> >> collection distribution stuff will work for replication.
> >>
> >> The rsyncd-enable command is simple enough, but the rsyncd-start
> >> command takes a -d (data dir) as an argument... Since I'm hosting 4
> >> different instances, all with their own data dirs, how do I do this?
> >>
> >> Also, you have to specify the master data dir when you are connecting
> >> from the slave anyway, so why does it need to be specified when I
> >> start the daemon? If I just start it with any old data dir, will it
> >> work for anything the user running it has perms on?
> >>
> >> Thanks,
> >> Jacob