Hmmm, when you say "When we send 100 million data", _how_ are you sending it? All at once? And is the read timeout on the client or in the server logs?
What I suspect is happening is that Solr is too busy to promptly read the entire packet you're sending. This could be due to several things:

- You're sending too much data at once. I generally send 1,000 docs in a packet.
- Your Solr instance may just be too busy. There are lots of ways it could be "too busy":
  -- You're telling ConcurrentUpdateSolrClient (CUSC) to use a large number of threads.
  -- Solr is spending a lot of resources doing garbage collection etc.
  -- You're running other processes on the Solr box.

Monitor your Solr server(s) to see the CPU consumption and whether it is CPU bound.

If you're getting a read timeout, then there's no guarantee that Solr has even received your data, and thus no way to determine what has been indexed, written to the tlog, etc.

By the way, if you're using SolrCloud, I recommend CloudSolrClient instead. Since you're using SolrJ, you can spin up multiple threads if you need the increased throughput.

Best,
Erick

On Wed, Dec 21, 2016 at 10:15 PM, 苗海泉 <mseaspr...@gmail.com> wrote:
> We are using Solr 6.0 and SolrJ 6.0, deployed in SolrCloud mode. Our
> code does not make an explicit commit; autoCommit and softAutoCommit
> are configured, and we use the ConcurrentUpdateSolrClient class.
>
> When we send 100 million documents, a read timeout exception often
> occurs and data is lost. I would like to ask a few questions:
> 1. If ConcurrentUpdateSolrClient.add does not throw an exception, does
>    that mean the data has been successfully sent to Solr? That is, is
>    the call synchronous -- does the Solr server accept the data and
>    write it to the log before add returns?
> 2. If the answer to question 1 is no, then how do we determine that
>    ConcurrentUpdateSolrClient.add failed, so that we can resend the
>    failed data?
> 3. Should we not use ConcurrentUpdateSolrClient at all?
> Thank you!
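The batching advice above (send ~1,000 docs per request, and treat an exception from add() as your failure signal) can be sketched as follows. This is a minimal illustration, not Erick's code: the partition() helper and sendBatch() stub are hypothetical names, and the commented-out CloudSolrClient calls show where the real SolrJ client would plug in.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of client-side batching for SolrJ indexing.
 * Assumes the setup described in the thread; sendBatch() is a hypothetical
 * stand-in for CloudSolrClient.add(batch) plus error handling.
 */
public class BatchedIndexer {
    // Roughly 1,000 docs per packet, per the advice in the thread.
    static final int BATCH_SIZE = 1000;

    /** Split the full document list into fixed-size batches. */
    static <T> List<List<T>> partition(List<T> docs, int size) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < docs.size(); i += size) {
            batches.add(new ArrayList<>(
                docs.subList(i, Math.min(i + size, docs.size()))));
        }
        return batches;
    }

    /**
     * Hypothetical sender. With CloudSolrClient the real body would be:
     *   try { client.add(collection, batch); return true; }
     *   catch (SolrServerException | IOException e) { return false; }
     * Unlike CUSC, CloudSolrClient's add() throwing is a usable failure
     * signal, so a false return here means "log/retry this batch".
     */
    static <T> boolean sendBatch(List<T> batch) {
        return true; // placeholder: no real Solr call in this sketch
    }

    public static void main(String[] args) {
        List<Integer> docs = new ArrayList<>();
        for (int i = 0; i < 2500; i++) docs.add(i);

        for (List<Integer> batch : partition(docs, BATCH_SIZE)) {
            if (!sendBatch(batch)) {
                // Keep the failed batch for retransmission instead of
                // losing it, addressing question 2 in the original mail.
                System.err.println("batch failed, queue for retry");
            }
        }
    }
}
```

Because each batch is sent in its own request, a failure only puts one batch (not the whole 100-million-doc stream) in doubt, which is what makes retransmission tractable.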