I ran a test on my SolrCloud. I tried to send 100 million documents via Hadoop into a
node that has no replica. When the document count sent to that node
reached around 30 million, RAM usage on my machine hit 99% (Solr heap usage
is not 99%; it uses only 3-4 GB of RAM). Some time later the node
stopped receiving documents to index and the Indexer Job failed as well.

How can I force the OS cache to be cleared (if it is the OS cache that is blocking me),
or what else should I do (maybe send 10 million documents, wait a little,
and so on)? What do people do in heavy indexing situations?
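To make the "send N million documents and wait a little" idea concrete, here is a minimal SolrJ sketch of that kind of batched indexing, sending documents in fixed-size batches and pausing after every large chunk. The Solr URL, collection name, field names, batch size, and pause interval are all assumptions for illustration, not taken from my actual Hadoop Indexer Job:

import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BatchIndexer {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and collection name; replace with your own.
        String baseUrl = "http://localhost:8983/solr";
        String collection = "mycollection";

        try (SolrClient client = new HttpSolrClient.Builder(baseUrl).build()) {
            List<SolrInputDocument> batch = new ArrayList<>();
            int batchSize = 10_000;              // documents per add() call
            long pauseEveryDocs = 10_000_000L;   // pause after this many documents
            long sent = 0;

            for (long i = 0; i < 100_000_000L; i++) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", Long.toString(i));
                doc.addField("body_t", "document " + i);
                batch.add(doc);

                if (batch.size() >= batchSize) {
                    client.add(collection, batch);   // buffered add, no commit per batch
                    batch.clear();
                    sent += batchSize;

                    if (sent % pauseEveryDocs == 0) {
                        client.commit(collection);   // hard commit to flush segments
                        Thread.sleep(60_000L);       // give merging and the OS cache a breather
                    }
                }
            }
            if (!batch.isEmpty()) {
                client.add(collection, batch);       // flush the last partial batch
            }
            client.commit(collection);
        }
    }
}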
