When I've run an optimize with Solr 4.8.1 (by clicking optimize from the
collection overview in the admin ui) it goes replica by replica, so it is
never doing more than one shard or replica at the same time.

It also significantly slows down operations that hit the replica being
optimized. I've seen clients hang for minutes waiting for an add-document
request to return.
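A per-replica optimize along the lines Erick suggests further down might look like the sketch below. The host, port, and core name are hypothetical placeholders (not values from this thread), and the snippet only prints the URL it would hit rather than issuing the request:

```shell
# Sketch (untested assumption): optimize a single core/replica instead of the
# whole collection. distrib=false keeps the request from fanning out to the
# other shards; waitSearcher=false returns before the new searcher is opened.
# HOST and CORE are placeholders, not values from this thread.
HOST="localhost:8983"
CORE="collection1_shard1_replica1"

URL="http://${HOST}/solr/${CORE}/update?optimize=true&distrib=false&waitSearcher=false"

# A real run would be: curl "$URL"
echo "$URL"
```

Whether `distrib=false` actually restricts the optimize to one replica is exactly the open question in the thread, so treat this as something to test, not a guarantee.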

On Fri, Nov 21, 2014 at 2:17 PM, Erick Erickson <erickerick...@gmail.com>
wrote:

> bq: if I can optimize one shard at a time
>
> Not sure. Try putting &distrib=false on the URL, but I don't know
> for sure whether that'd work or not. If this works at all, it'll work
> on one _replica_ at a time, not a shard.
>
> But why would you want to? Each optimization is local and runs
> in the background anyway. Or are you running an older master/slave
> setup? In which case I guess you might want to throttle replication,
> which you can do by enabling/disabling replication with the core admin
> API.
>
> Best,
> Erick
>
> On Fri, Nov 21, 2014 at 8:53 AM, Yago Riveiro <yago.rive...@gmail.com>
> wrote:
> > It's the "Deleted Docs" metric in the core statistics.
> >
> > I know that eventually the merges will expunge these deletes, but I will
> > run out of space soon and I want to know the _real_ space that I have.
> >
> > Actually I have enough space (about 3.5x the size of the index) to do
> > the optimize.
> >
> > Another question I have is whether I can optimize one shard at a time
> > instead of doing an optimize over the full collection (this would give me
> > more control over the space used; I have more than one shard of the same
> > collection on each node of the cluster).
> >
> > —
> > /Yago Riveiro
> >
> > On Fri, Nov 21, 2014 at 4:29 PM, Erick Erickson <erickerick...@gmail.com>
> > wrote:
> >
> >> Yes, should be no problem.
> >> Although this should be happening automatically: the percentage
> >> of deleted documents in a segment weighs quite heavily when the decision
> >> is made to merge segments in the background.
> >> You say you have "millions of deletes". Is this the difference between
> >> numDocs and maxDoc on the admin page for the core in question?
> >> Or is it just that you've issued millions of updates (or deletes)? Because
> >> if the latter, I'd advise monitoring the numDocs/maxDoc pair to see
> >> if the problem goes away on its own.
> >> bq: ...and need free space
> >> This is a red flag. If you're talking about disk space: before you get the
> >> free space, forceMerge will copy the _entire_ index, so you'll need at
> >> least 2x the current index size.
> >> Best,
> >> Erick
> >> On Fri, Nov 21, 2014 at 6:40 AM, yriveiro <yago.rive...@gmail.com>
> >> wrote:
> >>> Hi,
> >>>
> >>> Is it possible to perform an optimize operation while continuing to
> >>> index over a collection?
> >>>
> >>> I need to force-expunge deletes from the index; I have millions of
> >>> deletes and need free space.
> >>>
> >>> -----
> >>> Best regards
> >>> --
>
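To make the numDocs/maxDoc check Erick mentions above concrete: the count of deleted documents still occupying space is simply maxDoc minus numDocs. The figures below are invented for illustration:

```shell
# maxDoc counts live plus deleted-but-not-yet-merged documents;
# numDocs counts only live documents. The difference is roughly what an
# expungeDeletes or optimize could reclaim. Numbers here are made up.
NUM_DOCS=9000000
MAX_DOC=12000000

DELETED=$((MAX_DOC - NUM_DOCS))
echo "$DELETED"   # prints 3000000
```

Both values appear on the core's admin page and in the CoreAdmin STATUS output, so this is easy to watch over time to see whether background merges are keeping up.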

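Since the original question is about reclaiming space from deletes, a commit with expungeDeletes=true is worth noting as a lighter-weight alternative to a full optimize: it rewrites only segments whose deleted-document ratio is high enough under the merge policy, rather than the whole index. Host and core names are placeholders, and the snippet just prints the URL it would call:

```shell
# Sketch: an expungeDeletes commit merges away deleted docs without the
# full-index rewrite (and roughly 2x temporary disk overhead) of a
# forceMerge/optimize. HOST and CORE are placeholders.
HOST="localhost:8983"
CORE="collection1_shard1_replica1"

URL="http://${HOST}/solr/${CORE}/update?commit=true&expungeDeletes=true"

# A real run would be: curl "$URL"
echo "$URL"
```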