Thanks for the reply.
In my case, the order is definitely critical. It would be great if this could
be fixed. And yes, even SolrJ handles the deletes first and then the
adds/updates - that was the reason I switched from SolrJ to plain
HTTP.
There is a ticket with SolrJ as well
https://issues.ap
It's because of how we currently handle batched requests - we buffer a
different number of deletes than we do adds and flush them separately, mainly
because the size of each is likely to be so different; at one point we would
buffer a lot more deletes.
So currently, you want to break these up into separate requests, as in the
sketch below.
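For illustration, applying that advice over plain HTTP would mean sending the
deletes and the adds as two separate POSTs to the update handler, waiting for
the first response before issuing the second (the document IDs and field names
here are made up):

POST http://localhost:8983/solr/sample/update
{ "delete": { "id": "doc-1" } }

POST http://localhost:8983/solr/sample/update
{ "add": { "doc": { "id": "doc-1", "name": "new value" } } }

Since the second request is not sent until the first one has returned, the
delete should not end up being applied after the add.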
Also, I was referring to this wiki page:
http://wiki.apache.org/solr/UpdateJSON#Update_Commands
Thanks
Vinay
On Tue, Feb 19, 2013 at 6:12 PM, Vinay Pothnis wrote:
> Thanks for the reply Eric.
>
> * I am not using SolrJ
> * I am using plain http (apache http client) to send a batch of commands.
Thanks for the reply Eric.
* I am not using SolrJ
* I am using plain http (apache http client) to send a batch of commands.
* As I mentioned below, the JSON payload I am sending is like this (some of
the fields have been removed for brevity):
* POST http://localhost:8983/solr/sample/update
* POST B
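The actual payload is cut off above, but a mixed batch in the format described
on the UpdateJSON wiki page would look roughly like this - the field names are
placeholders, not the real ones:

POST http://localhost:8983/solr/sample/update
{
  "delete": { "id": "doc-1" },
  "add": { "doc": { "id": "doc-2", "name": "some value" } },
  "commit": {}
}

This is the kind of single request where the reordering discussed above can
show up.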
Hmmm, this would surprise me unless the add and delete were going to
separate machines. How are you sending them? SolrJ? And in a single
server.add(doclist) format or with individual adds?
Individual commands being sent can come 'round out of sequence - that's what
the whole optimistic locking bit is for.
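For reference, the two SolrJ styles being asked about look roughly like this
- the base URL, IDs, and field names are only placeholders:

import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class SolrJStyles {
    public static void main(String[] args) throws Exception {
        SolrServer server = new HttpSolrServer("http://localhost:8983/solr/sample");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");
        doc.addField("name", "some value");

        // Style 1: one batched call with a list of documents - server.add(doclist)
        List<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
        docs.add(doc);
        server.add(docs);

        // Style 2: individual commands, one request per command
        server.deleteById("doc-1");
        server.add(doc);

        server.commit();
    }
}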
Hello,
I have the following setup:
* Solr Cloud 4.1.0
* 2 shards with embedded ZooKeeper
* plain HTTP to communicate with Solr
I am testing a scenario where I am batching multiple commands and sending
them to Solr. Since this is the SolrCloud setup, I am always sending the
updates to one of the nodes.
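For illustration, a batch like the one described here might be posted with
Apache HttpClient roughly as follows - the URL, IDs, and field names are
placeholders and error handling is left out:

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

public class BatchUpdate {
    public static void main(String[] args) throws Exception {
        HttpClient client = new DefaultHttpClient();

        // One request body containing both a delete and an add for the same id.
        String body = "{ \"delete\": { \"id\": \"doc-1\" },"
                    + "  \"add\": { \"doc\": { \"id\": \"doc-1\", \"name\": \"new value\" } } }";

        // Always sent to the same node of the cluster.
        HttpPost post = new HttpPost("http://localhost:8983/solr/sample/update?commit=true");
        post.setHeader("Content-Type", "application/json");
        post.setEntity(new StringEntity(body, "UTF-8"));

        HttpResponse response = client.execute(post);
        System.out.println("Status: " + response.getStatusLine());
        EntityUtils.consume(response.getEntity());
    }
}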