On 1/10/2015 11:46 PM, ig01 wrote:
> Thank you all for your response,
> The thing is that we have 180G index while half of it are deleted documents.
> We  tried to run an optimization in order to shrink index size but it
> crashes on ‘out of memory’ when the process reaches 120G.   
> Is it possible to optimize parts of the index? 
> Please advise what can we do in this situation.

If you are getting "OutOfMemoryError" exceptions from Java, it means
the heap isn't large enough for what you have asked the program to do,
given both your configuration and the requests you are actually
sending.  You'll either need to allocate more memory to the heap, or
change your configuration so that less memory is required.

I see from a later reply that the 120GB size you have mentioned is your
Java heap.  Unless you've got hundreds of millions of documents on one
Solr instance/server (which would not be a good idea) and/or a serious
misconfiguration, I cannot imagine needing a heap that big for Solr.

The largest index on my dev Solr server has 98 million documents in
seven shards, with a total index size a little over 120GB (six shards
each 20GB and a seventh shard that's less than 1GB), and my heap size is
7 gigabytes.  There is also a smaller index with 17 million docs in
three shards; that one is about 10GB on disk.  Unlike the production
servers, the dev server holds all of that index data on one machine.

Here's a wiki page that covers things which cause large heap
requirements.  A later section also describes steps you can take to
reduce memory usage.

https://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
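
A common heap reducer is shrinking Solr's caches, which are defined
in solrconfig.xml.  As a sketch only (the size and autowarmCount
values here are illustrative, not recommendations), the filterCache
entry looks something like this:

  <filterCache class="solr.FastLRUCache"
               size="512"
               initialSize="512"
               autowarmCount="0"/>

A very large filterCache or a large autowarmCount is a frequent
source of heap pressure on a big index, because each cache entry can
be a bitset roughly one bit per document in size.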

How many documents do you have on a single Solr server?  Can you use a
site like http://apaste.info to share your solrconfig.xml?  I don't know
if we'll need the schema, but it might be a good idea to share that as well.

Thanks,
Shawn
