On 3/3/2016 11:36 PM, sangs8788 wrote:
> When a commit fails, the document doesn't get cleared out from MQ, and there is
> a task which runs in the background to republish the files to Solr. If we do a
> batch commit and it fails, we will not know which documents failed, and we will
> end up redoing the same batch commit again. We currently have a client-side
> commit which issues the command.
Hi Sangeetha,
It seems to me that you are using Solr as a primary data store. If that is
true, you should not do that - you should have some other store that is
transactional and can support what you are trying to do with Solr. If
you are not using Solr as the primary store, and it is critical to hav
> When a commit fails, the document doesn't get cleared out from MQ, and there is
> a task which runs in the background to republish the files to Solr. If we do a
> batch commit and it fails, we will not know which documents failed, and we will
> end up redoing the same batch commit again. We currently have a client-side
> commit which issues the command.
So batch them. You get a response back from Solr telling you whether the
documents were accepted. If that fails, there is a failure. What do you do then?
After every 100 docs or one minute, do a commit. Then delete the documents from
the input queue. What do you do when the commit fails?
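A sketch of that commit-every-100-docs-or-one-minute loop. The `client.add`/`client.commit` wrapper and the `ack` callback for deleting from the input queue are hypothetical stand-ins, not a real Solr client API; the point is that the batch is acknowledged back to MQ only after the commit succeeds:

```python
import time

BATCH_SIZE = 100
COMMIT_INTERVAL = 60.0  # seconds

def drain(queue, client, ack):
    """Index in batches; commit every BATCH_SIZE docs or COMMIT_INTERVAL
    seconds; ack (delete from the input queue) only after the commit
    succeeds, so a failed commit leaves the batch queued for retry."""
    batch = []
    last_commit = time.monotonic()
    while queue or batch:
        if queue:
            batch.append(queue.pop(0))
        timed_out = time.monotonic() - last_commit >= COMMIT_INTERVAL
        if batch and (len(batch) >= BATCH_SIZE or timed_out or not queue):
            try:
                client.add(batch)   # one update request for the whole batch
                client.commit()     # one commit per batch, not per document
            except Exception:
                return              # batch stays unacked; republish task retries
            for doc in batch:
                ack(doc)            # safe only now that the commit succeeded
            batch = []
            last_commit = time.monotonic()
```

On failure the batch is simply never acknowledged, which answers the "what do you do then?" question with the thread's own mechanism: the background republish task retries it.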
wunder
Walter Underwood
If you need transactions, you should use a different system, like MarkLogic.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Mar 3, 2016, at 8:46 PM, sangs8788
> wrote:
>
> Hi Emir,
>
> Right now we are having only inserts into Solr. The main reason for having
> commit after each document is to get a guarantee that the document has got
> indexed in Solr.
Hi Varun,
We don't have a SolrCloud setup in our system; we have a master-slave
architecture. In that case I don't see a way for Solr to guarantee
whether a document got indexed/committed successfully or not.
We even thought about having a flag set up in the DB for whichever documents
got committed to Solr.
Hi Emir,
Right now we have only inserts into Solr. The main reason for having a
commit after each document is to get a guarantee that the document has got
indexed in Solr. Until the commit status is received back, the document will
not be deleted from MQ, so that even if there is a commit failure the document
stays in MQ and can be republished.
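As a sketch, the per-document flow described above looks like this (the `solr` and `mq` objects are hypothetical stand-ins, not a real client API):

```python
def publish_one(doc, solr, mq):
    """Current per-document flow: add, commit, and only then delete the
    message from MQ. A failed commit leaves the document in MQ for the
    background republish task to pick up again."""
    try:
        solr.add([doc])
        solr.commit()        # the commit status is the delivery guarantee
    except Exception:
        return False         # keep the message in MQ; it will be republished
    mq.delete(doc["id"])     # clear from MQ only after a successful commit
    return True
```

The cost of this guarantee is one commit per document, which is what the rest of the thread pushes back on.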
Hi Sangeetha,
Well, I don't think you need to commit after every document add.
You can rely on Solr's transaction log feature. If you are using SolrCloud
it's mandatory to have a transaction log. Every document gets written
to the tlog. Now say a node crashes: even if documents were not committed,
they are still in the tlog and get replayed when the node comes back up.
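Concretely, that durability comes from the update log plus a server-side hard autoCommit in solrconfig.xml; the intervals below are illustrative, not recommendations:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Write-ahead transaction log: every add is recorded here first and
       replayed on restart if the node died before a hard commit. -->
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
  <!-- Let the server commit; clients stop sending commits entirely. -->
  <autoCommit>
    <maxTime>60000</maxTime>            <!-- hard commit at most every 60 s -->
    <openSearcher>false</openSearcher>  <!-- durability only, not visibility -->
  </autoCommit>
</updateHandler>
```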
Hi Sangeetha,
What is sure is that committing per document is not going to work: at
200-300K doc/hour there will be >50 commits/second, meaning there is
less than 20 ms available for each doc+commit.
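The arithmetic behind those bounds, for both ends of the stated ingest rate:

```python
# One commit per document at the stated ingest rate.
for docs_per_hour in (200_000, 300_000):
    commits_per_sec = docs_per_hour / 3600
    ms_budget = 1000 / commits_per_sec   # time available per doc+commit
    print(f"{docs_per_hour}/hour: {commits_per_sec:.1f} commits/s, "
          f"{ms_budget:.1f} ms each")
```

Even the low end, 200K/hour, is about 56 commits/s with an 18 ms budget; at 300K/hour it is about 83 commits/s and 12 ms.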
What you can do is let Solr handle commits, and maybe use real-time get to
verify a doc is in Solr, or do some periodic sanity checks.
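Real-time get reads from the update log, so it can confirm a document right after the add, before any commit. A small sketch against the standard `/get` handler, which requires the updateLog to be enabled (the base URL, core name, and helper names here are illustrative):

```python
import json
from urllib.request import urlopen

def rtg_url(base_url, core, doc_id):
    """URL for Solr's real-time get handler, e.g.
    http://localhost:8983/solr/mycore/get?id=42&wt=json"""
    return f"{base_url}/{core}/get?id={doc_id}&wt=json"

def is_indexed(body):
    """/get returns {"doc": null} when the id is unknown."""
    return body.get("doc") is not None

def verify(base_url, core, doc_id):
    """True if the document is visible to real-time get (committed or not)."""
    with urlopen(rtg_url(base_url, core, doc_id)) as resp:
        return is_indexed(json.load(resp))
```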
Are y