If you want this promise and complete control, you pretty much need to send one document per request, with many parallel requests for speed.
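A minimal sketch of that pattern, assuming a hypothetical `send_one_doc` stand-in for a single-document POST to Solr's /update handler (a real client would send HTTP and map a non-200 response to that one document):

```python
from concurrent.futures import ThreadPoolExecutor

def send_one_doc(doc):
    """Stand-in for a single-document update request to Solr.
    Returns (doc_id, ok). Because each request carries exactly one
    document, a failure is never ambiguous about which doc it was."""
    ok = doc.get("id") is not None  # pretend docs without an id are rejected
    return doc.get("id"), ok

docs = [{"id": str(i)} for i in range(100)] + [{"title": "no id"}]

# One doc per request gives per-document feedback; the thread pool
# recovers the throughput you lose by not batching.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(send_one_doc, docs))

failed = [doc_id for doc_id, ok in results if not ok]
print(len(results), len(failed))  # 101 1
```

The trade-off is exactly the one described above: batching is faster, but only the one-doc-per-request style tells you precisely which update was rejected.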
The bulk and streaming methods of adding documents do not yet have a good fine-grained error reporting strategy. That's okay for certain use cases, especially batch loading, and you will know when an update is rejected - it just might not be easy to know which document in the batch / stream. Documents that come in batches are added as they come / are processed - not as an atomic unit.

What controls how soon you will see documents, or whether you will see them while they are still loading, is simply when you soft commit and how many docs have been indexed when the soft commit happens.

- Mark

On Nov 25, 2013, at 1:03 AM, adfel70 <adfe...@gmail.com> wrote:

> Hi Mark, Thanks for the answer.
>
> One more question though: You say that if I get a success from the update,
> it's in the system, commit or not. But when exactly do I get this feedback -
> Is it one feedback per the whole request, or per one add inside the request?
> I will give an example to clarify my question: Say I have a new empty index, and
> I repeatedly send indexing requests - every request adds 500 new documents
> to the index. Is it possible, at some point during this process, to
> query the index and get a total of 1,030 docs? (Let's assume there were
> no indexing errors from Solr)
>
> Thanks again.
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Commit-behaviour-in-SolrCloud-tp4102879p4102996.html
> Sent from the Solr - User mailing list archive at Nabble.com.
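To make the visibility behavior concrete, here is a toy simulation (not Solr code - just a model of the semantics described above, with an assumed soft-commit point chosen for illustration). Batches of 500 are applied document by document, and a query sees whatever had been indexed at the moment of the last soft commit - so a count like 1,030 is entirely possible:

```python
class IndexModel:
    """Toy model of SolrCloud visibility: docs become part of the index
    as they are processed, but are only *searchable* up to the point of
    the last soft commit (which opens a new searcher)."""
    def __init__(self):
        self.indexed = 0   # docs accepted into the index
        self.visible = 0   # docs searchable by queries

    def add(self):
        self.indexed += 1

    def soft_commit(self):
        self.visible = self.indexed

idx = IndexModel()

# Stream three batches of 500; docs are applied as they arrive,
# not as an atomic unit per batch.
for batch in range(3):
    for doc in range(500):
        idx.add()
        # Suppose an autoSoftCommit happens to fire right after the
        # 1,030th document has been indexed (an arbitrary choice here).
        if idx.indexed == 1030:
            idx.soft_commit()

print(idx.visible, idx.indexed)  # 1030 1500
```

The searchable count depends only on where the soft commit landed relative to the stream of adds - not on batch boundaries.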