This sounds like an XY problem. What are you measuring
when you say RAM usage is 99%? Is this virtual memory? See:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
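
For reference, here's a bare-bones sketch (nothing specific to your
setup) that prints what the JVM heap itself is using. Anything top
reports for the Solr process beyond this number is mostly mmapped
index files, i.e. the OS page cache, which is harmless and gets
reclaimed under memory pressure:

public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        // JVM heap only; top's RES/VIRT for the process will show
        // mmapped index files on top of this.
        System.out.println("JVM heap used: " + usedMb + " MB of "
                + maxMb + " MB max");
    }
}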

What errors are you seeing when you say "my node stops
receiving documents"?

How are you sending those 100M documents? All at once in a huge
packet or some smaller number at a time? From where? How?
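
If you're feeding Solr via SolrJ, the usual pattern is something like
the sketch below (bare-bones, SolrJ 4.x style; the URL, collection
name, and doc fields are placeholders, and your Hadoop job obviously
looks different): batch the adds and commit once at the end, rather
than one giant request or one document per request.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BatchIndexer {
    public static void main(String[] args)
            throws SolrServerException, IOException {
        HttpSolrServer solr =
                new HttpSolrServer("http://localhost:8983/solr/collection1");
        List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
        for (int i = 0; i < 100000; i++) {   // stand-in for the real source
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-" + i);
            batch.add(doc);
            if (batch.size() == 1000) {      // send in modest chunks
                solr.add(batch);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            solr.add(batch);
        }
        solr.commit();                       // one commit at the end
    }
}

Batches of 500-1,000 documents are a common starting point; one huge
packet can blow past request buffers on the receiving node.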

And what does Hadoop have to do with anything? Are you putting
the Solr index on Hadoop? How? The recent contrib?

In short, you haven't provided many details. You've been around
long enough that I'm surprised you're saying "it doesn't work, how can
I fix it?" without giving us enough to help you.

Best
Erick



On Sat, Aug 24, 2013 at 1:52 PM, Furkan KAMACI <furkankam...@gmail.com> wrote:

> I ran a test on my SolrCloud. I tried to send 100 million documents via
> Hadoop into a node that has no replica. When the number of documents sent
> to that node reaches around 30 million, RAM usage on my machine hits 99%
> (Solr heap usage is not 99%; it uses just 3 GB - 4 GB of RAM). Some time
> later my node stops receiving documents to index and the Indexer Job
> fails as well.
>
> How can I force the OS cache to be cleared (if it is the OS cache that is
> blocking me), or what else should I do (maybe send 10 million documents
> at a time and wait a little, etc.)? What do people do in heavy indexing
> situations?
>
