Definitely, I agree. It's good to stop loading before taking the snapshot. Anyway, taking an index snapshot, say, every hour and re-indexing documents newer than the last 1-1.5 hours should reduce your index recovery time.
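As a rough sketch (not something I have running in production; the hostnames, port, and single-core /solr/replication path below are placeholders, so adjust them to your topology, e.g. /solr/<core>/replication if you run multiple cores), an hourly cron job could just walk the nodes and hit the replication handler's backup command on each one:

    # Sketch only: trigger the replication handler's backup command on every node.
    # Hostnames/port are placeholders.
    from urllib.request import urlopen

    NODES = ["solr-node1:8983", "solr-node2:8983"]  # placeholder hosts

    for node in NODES:
        url = "http://%s/solr/replication?command=backup" % node
        with urlopen(url) as resp:
            print(node, resp.getcode(), resp.read()[:200])

The backup command returns right away and the snapshot is copied in the background, so indexing can keep going while it runs (as Mark points out below).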
On 8 January 2013 07:36, Otis Gospodnetic <otis.gospodne...@gmail.com> wrote:

> Hi,
>
> Right, you can continue indexing, but if you need to run
> http://master_host:port/solr/replication?command=backup on each node and
> if you want a snapshot that represents a specific index state, then you
> need to stop indexing (and hard commit). That's what I had in mind. But
> if one just wants *some* snapshot and it doesn't matter that a snapshot
> on each node is from a slightly different time with a slightly different
> index make up, so to speak, then yes, just continue indexing.
>
> Otis
> --
> Solr & ElasticSearch Support
> http://sematext.com/
>
>
> On Mon, Jan 7, 2013 at 2:12 PM, Mark Miller <markrmil...@gmail.com> wrote:
>
> > You should be able to continue indexing fine - it will just keep a point
> > in time snapshot around until the copy is done. So you can trigger a
> > backup at any time to create a backup for that specific time, and keep
> > indexing away, and the next night do the same thing. You will always
> > have backed up to the point in time the backup command is received.
> >
> > - Mark
> >
> > On Jan 7, 2013, at 1:45 PM, Otis Gospodnetic <otis.gospodne...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > There may be a better way, but stopping indexing and then
> > > using http://master_host:port/solr/replication?command=backup on each
> > > node may do the backup trick. I'd love to see how/if others do it.
> > >
> > > Otis
> > > --
> > > Solr & ElasticSearch Support
> > > http://sematext.com/
> > >
> > >
> > > On Mon, Jan 7, 2013 at 10:33 AM, LEFEBVRE Guillaume <
> > > guillaume.lefeb...@cegedim.fr> wrote:
> > >
> > >> Hello,
> > >>
> > >> Using a SOLR Cloud architecture, what is the best procedure to back up
> > >> and restore the SOLR index and configuration?
> > >>
> > >> Thanks,
> > >> Guillaume
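P.S. On the "stop indexing and hard commit" point above: forcing the hard commit just before asking for the backup could look something like this (again only a sketch; host and path are placeholders):

    # Sketch only: hard commit, then trigger the point-in-time backup, so the
    # snapshot reflects a specific index state. Host/path are placeholders.
    from urllib.request import urlopen, Request

    node = "solr-node1:8983"  # placeholder host

    # POST an explicit <commit/> to the update handler (a hard commit).
    commit = Request(
        "http://%s/solr/update" % node,
        data=b"<commit/>",
        headers={"Content-Type": "text/xml"},
    )
    urlopen(commit).read()

    # Then ask the replication handler for the backup.
    urlopen("http://%s/solr/replication?command=backup" % node).read()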