Hello Kalle,
We noticed the same problem a few weeks ago:
http://lucene.472066.n3.nabble.com/Share-splitting-at-23-million-documents-gt-OOM-td4085064.html
It would be interesting to hear whether there is more positive feedback this time.
We eventually concluded that it may be worth starting with many shards
right away; as they grow, the shards can be distributed to other machines.
We have tested this approach and it works, though not yet in production.
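For illustration, a minimal sketch of the two Collections API calls involved. The host, collection name, and shard count below are placeholders, not values from this thread; the script only prints the request URLs (it assumes no live cluster), so pipe them to curl against your own setup:

```shell
#!/bin/sh
# Placeholder host and collection name -- adjust to your setup.
SOLR="http://localhost:8983/solr"
COLL="mycoll"

# Over-sharding up front: create the collection with more shards than one
# machine currently needs, so growth is absorbed by moving whole shards
# to new machines instead of splitting a large one later.
printf '%s\n' "$SOLR/admin/collections?action=CREATE&name=$COLL&numShards=8&replicationFactor=1"

# The operation that ran out of heap here: splitting an existing shard.
printf '%s\n' "$SOLR/admin/collections?action=SPLITSHARD&collection=$COLL&shard=shard1"
```

SPLITSHARD has to rewrite the index of the source shard, which is where a 15M-document shard can get expensive; shards created empty via numShards avoid that cost entirely.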
Regards,
Harald.
On 08.10.2013 08:43, Kalle Aaltonen wrote:
I have a test system with an index of 15M documents in one shard
that I would like to split in two. I've tried it four times now. A
stand-alone ZooKeeper is running on the same machine.
The end result is that I have two new shards with state "construction",
and each has one replica which is down.
Two of the attempts failed due to heap space; the heap size is now
24 GB. I can't figure out from the logs what is going on.
I've attached a log of the latest attempt. Any help would be much
appreciated.
- Kalle Aaltonen