: Thanks for the reply Asif. We have already tried removing the optimization
: step. Unfortunately the commit command alone is also causing identical
: behaviour. Is there anything else that we are missing?

the hardlinking behavior of snapshots is based on the files in the index 
directory, and the files in the index directory are based on the current 
segments of your index -- so if you make enough changes to your index to 
cause all of the segments to change, every snapshot will be different.
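
one quick way to verify this is to compare inodes across two snapshot 
directories: hardlinked files share an inode and cost no extra disk.  a 
minimal sketch in Python -- the snapshot directory names on the command 
line are placeholders for whatever yours are called:

    import os, sys

    def inodes(d):
        # map filename -> inode number for every file in a snapshot dir
        return {f: os.stat(os.path.join(d, f)).st_ino
                for f in os.listdir(d)}

    # e.g.: inode_diff.py snapshot.20090101000000 snapshot.20090102000000
    a, b = sys.argv[1], sys.argv[2]
    ia, ib = inodes(a), inodes(b)
    shared = [f for f in ib if ia.get(f) == ib[f]]       # hardlinked, free
    fresh = [f for f in ib if ib[f] not in ia.values()]  # new segment files
    print("shared with %s: %s" % (a, sorted(shared)))
    print("new in %s: %s" % (b, sorted(fresh)))

if "fresh" covers nearly everything after each batch, that's your answer: 
every commit is rewriting every segment.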

optimizing guarantees you that every segment will be different (because all 
the old segments are gone, and a new segment is created) but if your merge 
settings are set to be really aggressive, then it's equally possible that 
some number of delete/add calls will also cause every segment to be 
replaced.
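
for reference, merge aggressiveness is controlled in solrconfig.xml -- the 
values below are purely illustrative, not a recommendation.  a low 
mergeFactor means segments get merged (ie: rewritten) much more often:

    <mainIndex>
      <!-- higher mergeFactor = more segments kept around, fewer rewrites;
           a very low value (2-3) can rewrite segments on almost every
           batch -->
      <mergeFactor>10</mergeFactor>
      <maxBufferedDocs>1000</maxBufferedDocs>
    </mainIndex>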

without your configs, and directory listings of subsequent snapshots, it's 
hard to guess what the problem might be (if you've already stopped 
optimizing on every batch)

But I think we have an XY problem here...

: >> This process continues for around 160,000 documents i.e. 800 times and by
: >> the end of it we have 800 snapshots.

Why do you keep 800 snapshots?

you really only need snapshots around long enough to ensure that a slave 
isn't snappulling in the middle of deleting one ... unless you have some 
really funky usecase where you want some of your query boxes to 
deliberately fetch old versions of the index, you don't really need more 
than a couple of snapshots at any one time.
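
the stock replication scripts include a snapcleaner that prunes old 
snapshots for you; if you'd rather see the idea spelled out, here's a 
minimal Python sketch that keeps only the newest couple -- it assumes the 
usual snapshot.* naming under your data dir:

    import os, shutil, sys

    data_dir, keep = sys.argv[1], 2   # keep the two newest snapshots
    snaps = sorted(d for d in os.listdir(data_dir)
                   if d.startswith("snapshot."))
    for old in snaps[:-keep]:   # timestamp suffixes sort chronologically
        shutil.rmtree(os.path.join(data_dir, old))
        print("removed", old)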

it can be prudent to keep more snapshots than you "need" around in case 
of logical index corruption (ie: someone foolishly deletes a bunch of 
docs they shouldn't have) because snapshots are *usually* more disk 
space efficient than full backup copies -- but if you are finding that 
that's not the case, why bother keeping them?
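
if you want to check how efficient they really are, charge each inode 
only once across all snapshot directories -- hardlinked copies then cost 
nothing extra.  another small Python sketch, same placeholder layout as 
above:

    import os, sys

    data_dir = sys.argv[1]
    seen, total = set(), 0
    for snap in os.listdir(data_dir):
        if not snap.startswith("snapshot."):
            continue
        for f in os.listdir(os.path.join(data_dir, snap)):
            st = os.stat(os.path.join(data_dir, snap, f))
            if st.st_ino not in seen:   # hardlinks share an inode
                seen.add(st.st_ino)
                total += st.st_size
    print("real disk used by all snapshots: %d bytes" % total)

if that number is close to (number of snapshots) * (index size), the 
hardlinks aren't buying you anything, and cleanup is the way to go.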


-Hoss
