Hi

We have seen several cases at customer sites (Solr 5.3.1, one search node, one 
shard) with huge tlog files (more than 1 GB).

Our settings:

<updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
        <maxDocs>10000</maxDocs>
        <maxTime>30000</maxTime> <!-- 30 seconds -->
        <openSearcher>false</openSearcher> <!-- don't open a new searcher -->
    </autoCommit>

    <autoSoftCommit>
        <maxTime>1800000</maxTime> <!-- 30 minutes -->
    </autoSoftCommit>

    <updateLog>
        <str name="dir">${solr.data.dir:}</str>
    </updateLog>
</updateHandler>

I don't have enough logs, so I can't tell whether commits failed or not. I do 
remember there were OOM (OutOfMemoryError) messages.

As you may know, Solr tries to replay the tlog during restart, which can take a 
long time. As a workaround, I moved the tlog files to another location, started 
Solr, and only after the core was loaded did I move them back to their original 
location. They were cleared after a while.
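For clarity, here is a sketch of the steps I followed. The paths and file names are illustrative only (a real core's tlog lives under the core's data directory, e.g. <core>/data/tlog); this simulation just uses a temporary directory to show the move-aside/move-back sequence:

```shell
# Illustrative paths only -- substitute your core's real data directory.
DATA_DIR=$(mktemp -d)            # stands in for <core>/data
PARK_DIR=$(mktemp -d)            # temporary parking location for the tlog

# Pretend this is the existing transaction log.
mkdir -p "$DATA_DIR/tlog"
touch "$DATA_DIR/tlog/tlog.0000000000000000001"

# 1. With Solr stopped, move the tlog aside so startup skips the replay.
mv "$DATA_DIR/tlog" "$PARK_DIR/"

# 2. Start Solr and wait until the core is fully loaded (not shown here).

# 3. Move the tlog back to its original location.
mv "$PARK_DIR/tlog" "$DATA_DIR/"
```

This is of course a manual workaround, not something I would want in production.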

So I have a few questions:

  1.  Any idea what could cause the commit failures?
  2.  Should we decrease maxTime for hard commits, or change any other settings?
  3.  Is there a way to replay the tlog asynchronously (or disable replay 
entirely, so we could trigger it programmatically from our own code in a 
separate thread) so that Solr starts up faster?
  4.  Are there any improvements in this area in Solr 7.3.1?

Thanks in advance

Avi


