Unfortunately I don't know the internals of this code well. I vaguely
remember a problem with ensuring that deletes were handled correctly,
so this may be a manifestation of that fix. As I remember, optimistic
locking is mixed up in this too.

But all that means is that I really can't answer your question; I'll
have to leave that to people more familiar with the code.

Best
Erick

On Thu, May 23, 2013 at 9:30 AM, AlexeyK <lex.kudi...@gmail.com> wrote:
> the hard commit interval is set to about 20 minutes, while the RAM buffer is
> 256 MB. We will add more frequent hard commits without reopening the
> searcher; thanks for the tip.
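> If I understand correctly, that means an autoCommit block in solrconfig.xml
> with openSearcher set to false, along these lines (the maxTime value here is
> just an example, not a recommendation):
>
>     <autoCommit>
>       <maxTime>60000</maxTime>
>       <openSearcher>false</openSearcher>
>     </autoCommit>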
>
> from what I understood from the code, for each 'add' command there is a test
> for a 'delete by query'. If there is an older DBQ, it is re-run after the
> 'add' operation when its version > the 'add' version.
> In my case, there are a lot of documents to be inserted and a single large
> DBQ. My question is: shouldn't this be done in bulk? Why is it necessary to
> run the DBQ after each insertion? Suppose there are 1000 insertions; the DBQ
> is then run 1000 times.
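> To make what I mean concrete, here is a rough sketch of the behaviour as I
> understand it (this is not Solr's actual code; the class and method names are
> made up): every add is followed by a replay of any recorded delete-by-query
> whose version is newer than the add's version, so a single large DBQ ends up
> being re-executed once per insertion.
>
>     import java.util.ArrayList;
>     import java.util.List;
>
>     // Illustrative sketch only; names do not correspond to Solr internals.
>     class ReorderedDbqSketch {
>
>         static final class PendingDbq {
>             final String query;
>             final long version;
>             PendingDbq(String query, long version) {
>                 this.query = query;
>                 this.version = version;
>             }
>         }
>
>         private final List<PendingDbq> recentDbqs = new ArrayList<>();
>
>         void add(String doc, long addVersion) {
>             applyAdd(doc, addVersion);
>             // Re-apply every DBQ that is newer than this add, so a reordered
>             // delete still wins over the document it should have removed.
>             for (PendingDbq dbq : recentDbqs) {
>                 if (dbq.version > addVersion) {
>                     runDeleteByQuery(dbq.query, dbq.version);
>                 }
>             }
>         }
>
>         void deleteByQuery(String query, long version) {
>             runDeleteByQuery(query, version);
>             recentDbqs.add(new PendingDbq(query, version));
>         }
>
>         private void applyAdd(String doc, long version) { /* index the doc */ }
>
>         private void runDeleteByQuery(String query, long version) { /* run DBQ */ }
>     }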
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Solr-4-3-node-is-seen-as-active-in-Zk-while-in-recovery-mode-endless-recovery-tp4065549p4065628.html
> Sent from the Solr - User mailing list archive at Nabble.com.
