Thanks Shawn, I'm using Ubuntu and I'll try the rsync command. Unfortunately I'm using a replication factor of one, but I think the downtime will be less than five minutes if I follow your steps.
But how should I start Solr back up, and why is that step needed if I have already copied the index and changed the path? And what do you mean by "using multiple passes with rsync"?

Thanks,
Mahmoud

On Tuesday, August 1, 2017, Shawn Heisey <apa...@elyograg.org> wrote:
> On 7/31/2017 12:28 PM, Mahmoud Almokadem wrote:
> > I've a SolrCloud of four instances on Amazon, and the EBS volumes that
> > contain the data on every node are going to be full. Unfortunately,
> > Amazon doesn't support expanding the EBS, so I'll attach larger EBS
> > volumes to move the index to.
> >
> > I can stop the updates on the index, but I'm afraid to use the "cp"
> > command to copy files that are in the middle of a merge operation.
> >
> > The copy operation may take several hours.
> >
> > How can I move the data directory without stopping the instance?
>
> Use rsync to do the copy. Do an initial copy while Solr is running,
> then do a second copy, which should be pretty fast because rsync will
> see the data from the first copy. Then shut Solr down and do a third
> rsync, which will only copy a VERY small changeset. Reconfigure Solr
> and/or the OS to use the new location, and start Solr back up. Because
> you mentioned "cp", I am assuming that you're NOT on Windows, and that
> the OS will most likely allow you to do anything you need with index
> files while Solr has them open.
>
> If you have set up your replicas with SolrCloud properly, then your
> collections will not go offline when one Solr instance is shut down,
> and that instance will be brought back into sync with the rest of the
> cluster when it starts back up. Using multiple passes with rsync should
> mean that Solr does not need to be shut down for very long.
>
> The options I typically use for this kind of copy with rsync are
> "-avH --delete". I would recommend that you research rsync options so
> that you fully understand what I have suggested.
>
> Thanks,
> Shawn
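For anyone following this thread, here is a rough sketch of the multi-pass rsync sequence Shawn describes, using his "-avH --delete" options. The paths /var/solr/data (old volume) and /mnt/newvolume/data (new volume) and the systemd service name "solr" are placeholders; adjust them to your own install.

    # Pass 1: bulk copy while Solr is still running and serving traffic
    rsync -avH --delete /var/solr/data/ /mnt/newvolume/data/

    # Pass 2: still with Solr running; only transfers segments written
    # or merged since pass 1, so it is much faster
    rsync -avH --delete /var/solr/data/ /mnt/newvolume/data/

    # Stop Solr, run a final pass for the tiny remaining delta,
    # repoint the data location, then start Solr again
    sudo systemctl stop solr      # or bin/solr stop, depending on the install
    rsync -avH --delete /var/solr/data/ /mnt/newvolume/data/
    sudo systemctl start solr

One way to "reconfigure Solr and/or the OS", as Shawn puts it, is to unmount the old volume and mount the new one at the original path (via /etc/fstab) before restarting Solr, which avoids touching Solr's configuration at all; the alternative is changing Solr's data directory setting to point at the new location.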