Could you please provide some input/thoughts on how we can decide on the
configuration?
Thanks
Sangeetha
--
View this message in context:
http://lucene.472066.n3.nabble.com/Deciding-on-Solr-Nodes-and-Configuration-tp4261581p4262042.html
Sent from the Solr - User mailing list archive at Nabble.com.
When a commit fails, the document doesn't get cleared out of the MQ, and a
background task republishes those files to Solr. If we do a batch commit,
we will not know which documents failed, and we will end up redoing the same
batch commit again. We currently have a client-side commit which issues the
command
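The publish/commit/ack loop described above can be sketched as follows. This is a minimal illustration with a stubbed commit function and an in-memory queue standing in for the real MQ and Solr client (all names here are hypothetical): a document is removed from the queue only after its commit succeeds, and failed documents stay queued for the background republish task.

```python
from collections import deque

def publish_with_ack(queue, solr_commit):
    """Drain the MQ; delete a document only after its commit succeeds.

    queue       -- deque of documents (stand-in for the real MQ)
    solr_commit -- callable(doc) -> bool; stand-in for add + commit to Solr
    Documents whose commit fails are kept for the background republish
    task described in the thread.
    """
    failed = deque()
    while queue:
        doc = queue.popleft()
        if solr_commit(doc):
            continue            # commit acknowledged: document cleared from MQ
        failed.append(doc)      # commit failed: keep for republishing
    queue.extend(failed)        # republisher will retry these later
    return len(failed)

# Example: the commit for doc 2 fails, so it stays in the queue.
mq = deque([{"id": 1}, {"id": 2}, {"id": 3}])
failures = publish_with_ack(mq, lambda d: d["id"] != 2)
```

Note that because Solr overwrites documents by uniqueKey, republishing the same document after an uncertain commit is idempotent, which is what makes this retry loop safe.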
There will be 16 MQs sending documents to the Solr servers. Below are our
expectations:
Expected writes per month - 50 million (inserts only)
Size of each document - 10 KB to 70 KB
Expected reads - 10 per month
Highest hourly read rate - 2000/hour
In terms of high
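A back-of-the-envelope sizing from the numbers above (a sketch only; the 40 KB average document size is an assumption taken as the midpoint of the stated 10-70 KB range, and raw document size is not the same as on-disk index size):

```python
docs_per_month = 50_000_000          # expected writes per month (inserts only)
avg_doc_kb = 40                      # assumed midpoint of the 10-70 KB range

# Raw ingest volume and sustained write rate over a 30-day month.
ingest_gb_per_month = docs_per_month * avg_doc_kb / (1024 ** 2)
docs_per_second = docs_per_month / (30 * 24 * 3600)

print(round(ingest_gb_per_month))    # 1907 (GB of raw document data per month)
print(round(docs_per_second))        # 19  (documents per second, sustained)
```

So the cluster must absorb roughly 19 inserts/second sustained and on the order of 2 TB of raw document data per month, which is the kind of figure that drives the node-count and disk-sizing decision asked about above.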
Hi Varun,
We don't have a SolrCloud setup in our system; we have a Master-Slave
architecture. In that case I don't see a way for Solr to guarantee
whether a document got indexed/committed successfully or not.
We even thought about having a flag in the DB for whichever documents were
committed to Solr.
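The DB-flag idea above could look like the following sketch. The Solr lookup is stubbed here (all names are hypothetical); in practice it could be a query by uniqueKey after a commit, which only matches documents that are committed and searchable:

```python
def mark_committed(doc_ids, solr_has_doc, db_flags):
    """Set a 'committed' flag in the DB for each document Solr can return.

    doc_ids      -- ids of documents that were sent to Solr
    solr_has_doc -- callable(doc_id) -> bool; stand-in for a Solr query
                    by uniqueKey (matches only committed documents)
    db_flags     -- dict of doc_id -> bool; stand-in for the DB flag column
    Returns the ids still unverified, to be re-checked or republished.
    """
    pending = []
    for doc_id in doc_ids:
        if solr_has_doc(doc_id):
            db_flags[doc_id] = True   # verified searchable: flag as committed
        else:
            pending.append(doc_id)    # not visible yet: leave unflagged
    return pending

# Example: doc 2 is not yet searchable, so it stays pending.
flags = {}
pending = mark_committed([1, 2, 3], lambda i: i != 2, flags)
```

The same verification could also run as a periodic job, so individual inserts no longer need a commit each, which sidesteps the commit-per-document cost discussed in this thread.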
Hi Emir,
Right now we have only inserts into Solr. The main reason for committing
after each document is to get a guarantee that the document has been indexed
in Solr. Until the commit status is received back, the document will not be
deleted from the MQ, so that even if there is a commit failure
I just want to index only certain documents, and there will not be any
updates happening on the indexed documents.
In our existing system we already have DIH implemented, which indexes
documents from SQL Server (as you said, based on last index time). In this
case the metadata is available in dat
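For reference, the last-index-time mechanism mentioned above is DIH's delta import: DIH substitutes `${dataimporter.last_index_time}` into the entity's `deltaQuery`, and fetches each changed row via `deltaImportQuery`. A sketch of such an entity in data-config.xml (the table and column names are made up and would need to match the real SQL Server schema):

```xml
<entity name="doc"
        pk="id"
        query="SELECT id, title, body FROM docs"
        deltaQuery="SELECT id FROM docs
                    WHERE last_modified &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT id, title, body FROM docs
                          WHERE id = '${dih.delta.id}'">
</entity>
```

A delta import run (`command=delta-import`) then indexes only rows modified since the previous run, which fits the insert-only, no-update pattern described above.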