Missed an important detail. It works correctly for single-shard
collections.
--
Sachin
On Wed, Apr 22, 2020 at 10:03 PM Sachin Divekar wrote:
> Hi all,
>
> I am facing the exact same issue reported
> https://issues.apache.org/jira/browse/SOLR-8733 and
> https://issues.apache.
Hi all,
I am facing the exact same issue reported
https://issues.apache.org/jira/browse/SOLR-8733 and
https://issues.apache.org/jira/browse/SOLR-7404
I have tried it with Solr v8.4.1 and v8.5.1. In both cases, the cluster
consisted of three nodes, with a collection of 3 shards and 2 replicas.
Fo
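A minimal SolrJ sketch of the setup described above (3-node cluster,
collection with 3 shards and 2 replicas); the ZooKeeper addresses,
collection name, and configset are assumed placeholders:

import java.util.List;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class CreateTestCollection {
  public static void main(String[] args) throws Exception {
    // Connect through the cluster's ZooKeeper ensemble (placeholder addresses).
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        List.of("zk1:2181", "zk2:2181", "zk3:2181"), Optional.empty()).build()) {
      // Create a collection with 3 shards and replicationFactor 2,
      // using the "_default" configset.
      CollectionAdminRequest.createCollection("test_collection", "_default", 3, 2)
          .process(client);
    }
  }
}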
started thinking about it. If anybody has any suggestions,
please let me know.
thanks
Sachin
On Thu, Apr 9, 2020 at 8:13 PM Sachin Divekar wrote:
> Hi,
>
> We run a SaaS and have a Solr Cloud setup in our cloud. We are developing
> a client-side application. We want to have a local copy of
Hi,
We run a SaaS and have a Solr Cloud setup in our cloud. We are developing a
client-side application. We want to have a local copy of the client's
documents stored in Solr. Here, the client's documents are identified by
a particular field in the document, e.g. client_id.
I was searching for s
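One way to pull a client's documents for such a local copy is cursorMark
paging on the client_id field. A hedged SolrJ sketch; the Solr URL,
collection name, and client_id value are placeholders:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.params.CursorMarkParams;

public class ClientExport {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient solr =
             new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrQuery q = new SolrQuery("client_id:42");        // placeholder client id
      q.setRows(500);
      // cursorMark requires a deterministic sort that includes the unique key
      q.setSort(SolrQuery.SortClause.asc("id"));
      String cursor = CursorMarkParams.CURSOR_MARK_START;
      while (true) {
        q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
        QueryResponse rsp = solr.query("documents", q);   // placeholder collection
        for (SolrDocument doc : rsp.getResults()) {
          // write doc to the local copy here
        }
        String next = rsp.getNextCursorMark();
        if (cursor.equals(next)) {
          break;                                          // no more results
        }
        cursor = next;
      }
    }
  }
}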
Thanks, Chris.
I think I should stop thinking about doing it in Solr. Anyway, I was just
trying to see how far I can go.
On Wed, Mar 4, 2020 at 11:50 PM Chris Hostetter wrote:
>
> : So, I thought it can be simplified by moving these state transitions and
> : processing logic into Solr by writing
g there.
>
> I do wonder if it’s possible to ensure that a given doc is always updated
> from the same thread? I’m assuming that the root of your issue is that
> you’re pushing updates in parallel and the same doc is being updated from
> two different places.
>
> Best,
> Erick
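A minimal sketch of one way to get that per-document thread affinity on the
client side, assuming updates can be funneled through a small pool of
single-threaded workers keyed by the document id hash (the class and method
names are hypothetical):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DocAffinityUpdater {
  private final ExecutorService[] workers;

  public DocAffinityUpdater(int numWorkers) {
    workers = new ExecutorService[numWorkers];
    for (int i = 0; i < numWorkers; i++) {
      // One thread per slot, so updates within a slot are serialized.
      workers[i] = Executors.newSingleThreadExecutor();
    }
  }

  /** Updates for the same doc id always land on the same single-threaded worker. */
  public void submitUpdate(String docId, Runnable update) {
    int slot = Math.floorMod(docId.hashCode(), workers.length);
    workers[slot].submit(update);
  }
}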
lem
>
> Your question appears to be an "XY Problem" ... that is: you are dealing
> with "X", you are assuming "Y" will help you, and you are asking about "Y"
> without giving more details about the "X" so that we can understand the
Thanks, Erick.
I think I was not clear enough. With the custom update processor, I'm not
using optimistic concurrency at all. The update processor just modifies the
incoming document with updated field values and atomic update instructions.
It then forwards the modified request further in the chain
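A hedged sketch of that kind of processor, assuming the standard
UpdateRequestProcessor plugin API; the class name and the "status" field
are hypothetical:

import java.io.IOException;
import java.util.Map;

import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

public class AtomicRewriteProcessorFactory extends UpdateRequestProcessorFactory {
  @Override
  public UpdateRequestProcessor getInstance(SolrQueryRequest req, SolrQueryResponse rsp,
                                            UpdateRequestProcessor next) {
    return new UpdateRequestProcessor(next) {
      @Override
      public void processAdd(AddUpdateCommand cmd) throws IOException {
        SolrInputDocument doc = cmd.getSolrInputDocument();
        // Rewrite a plain field value into an atomic-update instruction,
        // e.g. status=ACTIVE becomes status={"set": ACTIVE}.
        Object status = doc.getFieldValue("status");
        if (status != null && !(status instanceof Map)) {
          doc.setField("status", Map.of("set", status));
        }
        // Forward the modified request further down the chain.
        super.processAdd(cmd);
      }
    };
  }
}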
Hi,
We are using Solr in an application with many update operations. This may not
be the right use case for Solr, but it's an old application and at this moment
we are in no mood to replace Solr with something else.
For one of our use cases, we had to use optimistic concurrency for handling
concurrent upd
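A hedged SolrJ sketch of the basic optimistic-concurrency loop (read the
current _version_, resend it with the update, treat an HTTP 409 as a
conflict); the collection name, document id, and field are placeholders:

import java.util.Map;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrException;
import org.apache.solr.common.SolrInputDocument;

public class OptimisticUpdate {
  public static void main(String[] args) throws Exception {
    try (SolrClient solr =
             new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      // Read the document's current _version_ (collection and id are placeholders).
      SolrDocument current = solr.getById("documents", "doc-1");
      long version = (Long) current.getFieldValue("_version_");

      SolrInputDocument update = new SolrInputDocument();
      update.setField("id", "doc-1");
      // The update is accepted only if the stored version still matches.
      update.setField("_version_", version);
      update.setField("status", Map.of("set", "PROCESSED"));

      try {
        solr.add("documents", update);
        solr.commit("documents");
      } catch (SolrException e) {
        if (e.code() == 409) {
          // Version conflict: another writer got there first; re-read and retry.
        } else {
          throw e;
        }
      }
    }
  }
}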
nt/solr-mapping-processor
>
> Jan
>
> > On 24 Feb 2020, at 14:03, Sachin Divekar wrote:
> >
> > Hi,
> >
> > I am developing a custom update processor. I am using
> solr-common.jar:1.3.0
> > which I found on Maven.
> >
> > I am studying the code
Hi,
I am developing a custom update processor. I am using solr-common.jar:1.3.0
which I found on Maven.
I am studying the code in the Solr repo. I found there are many methods
available in src/java/org/apache/solr/common/SolrInputDocument.java which
are not available to me after importing solr-commo
Hi,
I am trying to use *must-exist* and *must-not-exist* semantics of
optimistic concurrency provided by Solr. When doing batch updates, Solr
stops indexing immediately when it encounters a conflict. It does not
process subsequent records in the input list.
That is one extreme. And the other extr
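For reference, the special _version_ values that express those semantics in
Solr's optimistic concurrency, sketched on SolrInputDocument; the ids and
fields are placeholders:

import java.util.Map;
import org.apache.solr.common.SolrInputDocument;

public class VersionSemantics {
  public static void main(String[] args) {
    // _version_ = 1: the document must already exist (any current version).
    SolrInputDocument mustExist = new SolrInputDocument();
    mustExist.setField("id", "doc-1");
    mustExist.setField("_version_", 1L);
    mustExist.setField("status", Map.of("set", "ACTIVE"));

    // _version_ < 0: the document must NOT already exist.
    SolrInputDocument mustNotExist = new SolrInputDocument();
    mustNotExist.setField("id", "doc-2");
    mustNotExist.setField("_version_", -1L);
    mustNotExist.setField("status", "NEW");

    // _version_ > 1: the document must exist with exactly that version.
    // _version_ = 0: no version check at all.
  }
}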