On 12/21/2016 11:15 PM, 苗海泉 wrote:
> We are using Solr 6.0 with SolrJ 6.0, deployed in SolrCloud mode. Our
> code does not make an explicit commit; autoCommit and softAutoCommit
> are configured, and we use the ConcurrentUpdateSolrClient class.
>
> When we send 100 million documents, read timeout exceptions often
> occur and data is lost. I would like to ask a few questions:
>
> 1. When ConcurrentUpdateSolrClient.add returns without throwing an
> exception, does that mean the data was successfully sent to Solr?
> That is, is the call synchronous -- does it return only after the
> Solr server has accepted the data and written it to its log?
I can't decipher what you're asking here.

> 2. If the answer to question 1 is no, then how do we detect that a
> ConcurrentUpdateSolrClient.add call failed, so that we can retransmit
> the data that was not indexed?

ConcurrentUpdateSolrClient will *never* inform you about exceptions that occur related to the "add" calls you make. Those calls return to your code immediately, and the actual adds are done in the background. Errors that occur will be logged, but no exceptions will make it back to your code. All of your Solr servers could be completely down, and the "add" calls would still show no errors at all.

Use HttpSolrClient or CloudSolrClient as appropriate, like Erick mentioned. If you want multi-threaded indexing *and* error detection, you'll have to write the multi-threading yourself.

Thanks,
Shawn
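To illustrate the "write the multi-threading yourself" approach, here is a minimal sketch: a fixed thread pool where each worker sends one batch synchronously and catches its own exceptions, so failed batches can be collected for retransmission. The Solr client is abstracted behind a hypothetical BatchSender interface (my name, not a SolrJ type) so the sketch is self-contained; in real code send() would wrap a synchronous call such as CloudSolrClient.add, which throws on failure.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class IndexerSketch {
    /** Stand-in for a synchronous indexing call like CloudSolrClient.add(docs). */
    interface BatchSender {
        void send(List<String> batch) throws Exception;
    }

    /** Sends all batches on a thread pool; returns the batches that failed. */
    public static List<List<String>> indexAll(List<List<String>> batches,
                                              BatchSender sender,
                                              int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        ConcurrentLinkedQueue<List<String>> failed = new ConcurrentLinkedQueue<>();
        for (List<String> batch : batches) {
            pool.submit(() -> {
                try {
                    sender.send(batch);   // synchronous: an exception here means
                } catch (Exception e) {   // this batch did not reach Solr
                    failed.add(batch);    // remember it so it can be resent
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return new ArrayList<>(failed);
    }

    public static void main(String[] args) throws Exception {
        List<List<String>> batches = List.of(
            List.of("doc1", "doc2"), List.of("doc3"), List.of("doc4"));
        // Stub sender that simulates a read timeout on one batch.
        BatchSender stub = batch -> {
            if (batch.contains("doc3")) throw new Exception("read timeout");
        };
        List<List<String>> failed = indexAll(batches, stub, 2);
        System.out.println("failed batches: " + failed.size()); // prints "failed batches: 1"
    }
}
```

Unlike ConcurrentUpdateSolrClient, this pattern gives you concurrency *and* a definite failure signal per batch, since each add call runs to completion on the worker thread before it returns.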