I first wrote the “fall back to one at a time” code for Solr 1.3.
It is pretty easy if you plan for it. Make the batch size variable. When a
batch fails, retry with a batch size of 1 for that particular batch. Then keep
going or fail; either way, you have good logging of which one failed.
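Roughly like this (an untested sketch with made-up names, not the actual
Solr 1.3 code; it assumes SolrJ's SolrClient and a unique "id" field):

import java.io.IOException;
import java.util.List;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.common.SolrInputDocument;

class BatchFallback {
    // Send the whole batch; if it fails, retry with a batch size of 1
    // so the offending document can be identified and logged.
    static void addWithFallback(SolrClient client, List<SolrInputDocument> batch) {
        try {
            client.add(batch);  // normal path: one round trip
        } catch (SolrServerException | IOException e) {
            for (SolrInputDocument doc : batch) {
                try {
                    client.add(doc);  // batch size of 1
                } catch (SolrServerException | IOException perDoc) {
                    System.err.println("Bad doc " + doc.getFieldValue("id")
                            + ": " + perDoc.getMessage());
                }
            }
        }
    }
}

Documents that were accepted before the batch failed just get re-added; with
the default overwrite-by-uniqueKey behavior that is harmless.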
wunder
Steven's solution is a very common one, down to the
notion of re-chunking. Depending on the throughput requirements,
simply resending the documents from the offending packet one at a time is often
sufficient (but not _efficient_). I can imagine fallback scenarios
like "try chunking 100 at a time; for the chunks that fail, re-send
in smaller batches until the bad document is isolated."
For my application, the solution I implemented is to log the chunk that
failed to a file. This file is then post-processed one record at a
time. The ones that still fail are reported to the admin and not looked at
again until the admin takes action. This is not the most efficient
solution right now.
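Something along these lines (rough sketch; assumes the documents carry a
unique "id" field and that the replay job can rebuild a document from its
logged id):

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;
import org.apache.solr.common.SolrInputDocument;

class FailureLog {
    // Append the ids of a failed chunk to a file; a separate replay job
    // re-sends each record individually and reports the ones that still
    // fail to the admin.
    static void logFailedChunk(List<SolrInputDocument> chunk, Path failLog)
            throws IOException {
        try (BufferedWriter w = Files.newBufferedWriter(failLog,
                StandardCharsets.UTF_8,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
            for (SolrInputDocument doc : chunk) {
                w.write(String.valueOf(doc.getFieldValue("id")));
                w.newLine();
            }
        }
    }
}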
Thanks Erick. How do people handle this scenario? Right now the only option
I can think of is to replay the entire batch by doing an add for every single
doc. But then this will give me errors for all the docs that were already
added from the batch.
On Tue, Feb 9, 2016 at 10:57 PM, Erick Erickson wrote:
This has been a long-standing issue; Hoss is doing some current work on it, see:
https://issues.apache.org/jira/browse/SOLR-445
But the short form is "no, not yet".
Best,
Erick
On Tue, Feb 9, 2016 at 8:19 AM, Debraj Manna wrote:
Hi,
I have a Document Centric Versioning Constraints processor added in my Solr
schema:

<processor class="solr.DocBasedVersionConstraintsProcessorFactory">
  <bool name="ignoreOldUpdates">false</bool>
  <str name="versionField">doc_version</str>
</processor>
I am adding multiple documents to Solr in a single call using SolrJ 5.2. The
code fragment looks something like below:
try {
  UpdateResponse resp = solrClient.add(docs.getDocCollection());
} catch (SolrServerException | IOException e) { /* ... */ }